
langtorch.tt

The building blocks of PyTorch models are Modules from torch.nn. The langtorch equivalent is langtorch.tt, which contains TextModules. The goal of this parallel is to systematize the emerging LLM app architectures in terms of the existing neural network architectures they share a topology with. For example, tree summarization, a method for summarizing longer texts by summarizing them in chunks of, e.g., 5 paragraphs, can be equated to a convolutional layer with a "Summarize" kernel and stride 5 (Conv TextModules are coming soon, by the way).
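To make the convolution analogy concrete, here is a minimal pure-Python sketch of tree summarization. The `summarize` function is a hypothetical stand-in for an LLM call (not part of langtorch); the recursion slides a "kernel" of 5 paragraphs with stride 5 over the text, exactly like a non-overlapping convolution, until a single summary remains:

```python
def summarize(paragraphs):
    # Hypothetical stand-in for an LLM "Summarize" call.
    return "Summary of %d paragraphs" % len(paragraphs)

def tree_summarize(paragraphs, kernel=5, stride=5):
    # Like a conv layer: apply the kernel to each window of `kernel`
    # paragraphs, moving by `stride`, then recurse on the outputs.
    if len(paragraphs) <= kernel:
        return summarize(paragraphs)
    chunks = [paragraphs[i:i + kernel] for i in range(0, len(paragraphs), stride)]
    return tree_summarize([summarize(c) for c in chunks], kernel, stride)

# 12 paragraphs -> 3 chunk summaries -> 1 final summary
print(tree_summarize(["p%d" % i for i in range(12)]))
```

With a real LLM call in place of `summarize`, each recursion level is one "layer" of the program.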

This philosophy is what motivates the use of the term "activation" for Modules that are LLM calls. It lets you think of the distinct steps of a sequential LLM program as subsequent "layers", where within a layer many textual tasks can be done in parallel, and the matrix algebra of TextTensors helps quickly set up all the prompts for these parallel tasks. For example, we can use a linear "layer", tt.Linear, to format parts of the prompt separately, since a row of the weight matrix in a linear operation on a vector acts as:

```python
import langtorch
from langtorch import TextTensor

# A (1, 3) row of weights applied to a (3, 1) column of texts:
# TextTensor([[w1, w2, w3]]) @ TextTensor([[t1], [t2], [t3]])
# is equal to (w1*t1 + w2*t2 + w3*t3), so we can format:
prompt = "Which of these texts is most clear? "
task_on_3_texts = langtorch.tt.Linear([[prompt + "Text 1: ", "Text 2: ", "Text 3: "]])

# Example inputs; any three texts to compare
text1, text2, text3 = "The cat sat.", "Cat on mat.", "A feline rested."
texts = TextTensor([text1, text2, text3]).reshape(-1, 1)

result = task_on_3_texts(texts)
```
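The shape algebra above can be illustrated without langtorch at all. In this plain-Python sketch, "multiplication" of two text entries is concatenation and "addition" joins the products; this only illustrates the (1, n) @ (n, 1) pattern, and the actual TextTensor semantics may differ in details:

```python
def text_matmul(row, col, sep=" "):
    # (1, n) @ (n, 1) for strings: w*t concatenates, + joins the products.
    # A sketch of the algebra, not langtorch's real implementation.
    return sep.join(w + t for w, t in zip(row, col))

prompt = "Which of these texts is most clear? "
row = [prompt + "Text 1: ", "Text 2: ", "Text 3: "]
col = ["The cat sat.", "Cat on mat.", "A feline rested."]
print(text_matmul(row, col))
```

The result is a single prompt that interleaves each weight with its corresponding text, which is what lets one Linear "layer" assemble many structured prompts at once.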

Other subclasses include ChatModule, which keeps a "conversation history" by always outputting the input concatenated with its own output, setting the key of that output to assistant. Learn about chat Text entries here.
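The history-keeping pattern described above can be sketched in a few lines. Here chat entries are modeled as hypothetical (key, text) pairs and `generate` stands in for the LLM call; the real ChatModule operates on TextTensors, so treat this only as an illustration of the input + output behavior:

```python
def chat_step(history, generate):
    # Output is the input history plus the module's own output,
    # with the new entry keyed "assistant".
    reply = generate(history)
    return history + [("assistant", reply)]

history = [("user", "Hi!")]
history = chat_step(history, lambda h: "You said: " + h[-1][1])
print(history)
```

Because each call returns the full accumulated history, chaining ChatModules in sequence naturally threads the conversation through every "layer".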

Some experimental TextModules available in langtorch.tt
