Llama 2

Technology

An open-source large language model released by Meta (formerly Facebook). While praised for its openness, its license requires organizations exceeding 700 million active users to obtain a separate license from Meta.


First Mentioned

1/11/2026, 5:29:59 AM

Last Updated

1/11/2026, 5:41:30 AM

Research Retrieved

1/11/2026, 5:41:30 AM

Summary

Llama 2 is a family of large language models (LLMs) developed by Meta AI, released in July 2023 as a successor to the original Llama. It features models ranging from 7 billion to 70 billion parameters and introduced instruction fine-tuned versions, known as Llama-2-Chat, alongside foundational models. Unlike its predecessor, Llama 2 was made available for commercial use for organizations with fewer than 700 million active users, as well as for research. The model's development and open-source nature have been central to debates regarding AI regulation, specifically the Biden administration's executive order on AI, which critics like David Friedberg argue could stifle innovation and create regulatory capture for incumbents. Meta has since integrated subsequent versions, such as Llama 3, into its core services like Facebook and WhatsApp.

Referenced in 1 Document
Research Data
Extracted Attributes
  • License

    Meta Llama 2 License (Commercial use allowed for <700M active users)

  • Developer

    Meta AI

  • Architecture

    Auto-regressive transformer

  • Release Date

    2023-07-18

  • Training Data

    2 trillion tokens

  • Context Window

    4,096 tokens

  • Parameter Sizes

    7 billion, 13 billion, 70 billion
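The parameter sizes above largely determine serving memory. A minimal sketch of that arithmetic (a hypothetical helper; it assumes 2 bytes per parameter for fp16 and 0.5 bytes for 4-bit quantization, and ignores KV-cache and activation overhead, so real usage is higher):

```python
# Rough checkpoint-memory estimate for Llama 2 at different precisions.
# Assumption: memory ≈ parameters × bytes-per-parameter; treat results
# as lower bounds since KV-cache and activations add overhead.
BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "q4": 0.5}

def est_gib(params_billions: float, precision: str = "fp16") -> float:
    """Approximate checkpoint size in GiB for a given parameter count."""
    total_bytes = params_billions * 1e9 * BYTES_PER_PARAM[precision]
    return total_bytes / 2**30

for size in (7, 13, 70):
    print(f"{size}B: fp16 ≈ {est_gib(size):.1f} GiB, "
          f"q4 ≈ {est_gib(size, 'q4'):.1f} GiB")
```

The q4 figure for the 7B model (~3.3 GiB) lines up with the ~3.8 GB download reported for the default quantized `llama2` build in the Ollama result below.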

Timeline
  • Meta AI releases the original Llama model (Llama 1) for research purposes. (Source: Wikipedia)

    2023-02-24

  • Meta and Microsoft announce the release of Llama 2 for research and commercial use. (Source: Web Search)

    2023-07-18

  • President Biden issues the Executive Order on AI, which is critiqued for its potential impact on open-source models like Llama 2. (Source: 7ecebfd6-9d29-4613-8e8d-9eb9568f5bef)

    2023-10-30

  • Release of Llama 4, the latest version in the Llama family. (Source: Wikipedia)

    2025-04-01

Llama (language model)

Llama ("Large Language Model Meta AI" serving as a backronym) is a family of large language models (LLMs) released by Meta AI starting in February 2023. Llama models come in different sizes, ranging from 1 billion to 2 trillion parameters. Initially only a foundation model, starting with Llama 2, Meta AI released instruction fine-tuned versions alongside foundation models. Model weights for the first version of Llama were only available to researchers on a case-by-case basis, under a non-commercial license. Unauthorized copies of the first model were shared via BitTorrent. Subsequent versions of Llama were made accessible outside academia and released under licenses that permitted some commercial use. Alongside the release of Llama 3 and a standalone website, Meta added virtual assistant features to Facebook and WhatsApp in select regions; both services used a Llama 3 model. The latest version is Llama 4, released in April 2025.

Web Search Results
  • Llama (language model) - Wikipedia

    On July 18, 2023, in partnership with Microsoft, Meta announced Llama 2 (stylized as LLaMa 2), the next generation of Llama. Meta trained and released Llama 2 in three model sizes: 7, 13, and 70 billion parameters. The model architecture remains largely unchanged from that of Llama 1 models, but 40% more data was used to train the foundational models. [...] Llama 1 models are only available as foundational models with self-supervised learning and without fine-tuning. Llama 2 – Chat models were derived from foundational Llama 2 models. Unlike GPT-4, which increased context length during fine-tuning, Llama 2 and Code Llama – Chat have the same context length of 4K tokens. Supervised fine-tuning used an autoregressive loss function with token loss on user prompts zeroed out. The batch size was 64.

  • meta-llama/Llama-2-7b - Hugging Face

    # Llama 2. Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B pretrained model. Links to other models can be found in the index at the bottom. ## Model Details. Note: use of this model is governed by the Meta license; to download the model weights and tokenizer, visit the website and accept the License before requesting access. [...] Model Architecture: Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.

    | Model | Training Data | Params | Content Length | GQA | Tokens | LR |
    |---|---|---|---|---|---|---|
    | Llama 2 | A new mix of publicly available online data | 7B | 4k | ✗ | 2.0T | 3.0 × 10⁻⁴ |
    | Llama 2 | A new mix of publicly available online data | 13B | 4k | ✗ | 2.0T | 3.0 × 10⁻⁴ |
    | Llama 2 | A new mix of publicly available online data | 70B | 4k | ✔ | 2.0T | 1.5 × 10⁻⁴ |

    [...] Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. The fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks Meta tested and, in Meta's human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM. Model Developers: Meta. Variations: Llama 2 comes in a range of parameter sizes (7B, 13B, and 70B) as well as pretrained and fine-tuned variations. Input: models input text only. Output: models generate text only.

  • What Is Llama 2? | IBM

    Artificial Intelligence # What is Llama 2? ## Authors: Dave Bergmann, Senior Staff Writer, AI Models, IBM Think. Llama 2 is a family of pre-trained and fine-tuned large language models (LLMs) released by Meta AI in 2023. Released free of charge for research and commercial use, Llama 2 AI models are capable of a variety of natural language processing (NLP) tasks, from text generation to programming code. [...] The Llama 2 model family, offered as both base foundation models and fine-tuned "chat" models, serves as the successor to the original LLaMa 1 models, which were released in 2023 under a noncommercial license granting access on a case-by-case basis exclusively to research institutions. Unlike their predecessors, Llama 2 models are available free of charge for both AI research and commercial use. [...] Greater context length: Llama 2 models offer a context length of 4,096 tokens, double that of LLaMa 1. The context length (or context window) is the maximum number of tokens the model can "remember" during inference (i.e. the generation of text or an ongoing conversation), allowing for greater complexity and a more coherent, fluent exchange of natural language. Greater accessibility: whereas LLaMa 1 was released exclusively for research use, Llama 2 is available to any organization with fewer than 700 million active users.

  • llama2 - Ollama

    5M downloads, updated 2 years ago. Llama 2 is a collection of foundation language models ranging from 7B to 70B parameters, published as 7b (3.8GB), 13b (7.4GB), and 70b (39GB) variants, each with a 4K context window and text-only input. ## Readme. Llama 2 is released by Meta Platforms, Inc. This model is trained on 2 trillion tokens, and by default supports a context length of 4096. Llama 2 Chat models are fine-tuned on over 1 million human annotations, and are made for chat. ### CLI. Open the terminal and run `ollama run llama2`. ### API. Example using curl against Ollama's default local endpoint: ``` curl http://localhost:11434/api/generate -d '{ "model": "llama2", "prompt": "Why is the sky blue?" }' ``` ## Memory requirements. 7b models generally require at least 8GB of RAM, 13b models at least 16GB, and 70b models at least 64GB. If you run into issues with higher quantization levels, try using the q4 model or shut down any other programs that are using a lot of memory.

  • Llama 2: Open Foundation and Fine-Tuned Chat Models - arXiv

    Abstract: In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our models outperform open-source chat models on most benchmarks we tested, and based on our human evaluations for helpfulness and safety, may be a suitable substitute for closed-source models. We provide a detailed description of our approach to fine-tuning and safety improvements of Llama 2-Chat in order to enable the community to build on our work and contribute to the responsible development of LLMs. By Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, et al.
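The Wikipedia snippet above notes that Llama 2's supervised fine-tuning zeroed out the token loss on user prompts, so the model is only penalized on its response tokens. A minimal sketch of that labeling scheme in plain Python (the -100 ignore index follows the common PyTorch cross-entropy convention; the token IDs are hypothetical):

```python
# Build SFT training labels where prompt tokens are masked with the
# ignore index (-100, skipped by PyTorch's cross-entropy loss), so the
# loss is computed only on the assistant's response tokens.
IGNORE_INDEX = -100

def make_labels(prompt_ids: list[int], response_ids: list[int]) -> list[int]:
    """Labels for one example: prompt positions ignored, response kept."""
    return [IGNORE_INDEX] * len(prompt_ids) + list(response_ids)

prompt = [101, 2054, 2003]    # hypothetical tokenized user prompt
response = [1996, 3712, 102]  # hypothetical tokenized model response
print(make_labels(prompt, response))  # → [-100, -100, -100, 1996, 3712, 102]
```

Pairing these labels with the concatenated prompt+response input IDs reproduces the "loss on user prompts zeroed out" behavior described in the snippet.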
