
Groq

Organization

A startup building proprietary AI inference hardware, co-founded by Jonathan Ross. Chamath's Social Capital provided $10 million in seed funding in 2017, and he sees it as a key 'picks and shovels' player.


First Mentioned

1/1/2026, 5:25:16 AM

Last Updated

1/4/2026, 3:45:35 AM

Research Retrieved

1/1/2026, 5:28:32 AM

Summary

Groq, Inc. is a Mountain View-based artificial intelligence company specializing in high-performance AI inference hardware and software. Founded in 2016 by Jonathan Ross, one of the designers of Google's Tensor Processing Unit (TPU), and Douglas Wightman, the company developed the Language Processing Unit (LPU) architecture to optimize large language model (LLM) performance. Groq's technology focuses on the 'decode' phase of AI processing, offering low-latency inference that complements Nvidia's 'prefill' dominance. In December 2025, Groq entered a landmark $20 billion agreement with Nvidia to license its inference technology and transfer several senior executives, while maintaining its status as an independent company. Groq has expanded globally with offices in the U.S., Canada, and the U.K., and serves over 2.5 million developers.
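The prefill/decode split mentioned above can be made concrete with a toy sketch. This is not Groq's actual stack, just an illustration of why the two phases differ: "prefill" processes the whole prompt in one parallel pass, while "decode" emits tokens one at a time, so each step waits on the previous one and overall latency is dominated by this phase (the one Groq's LPU targets).

```python
# Toy illustration (NOT Groq's real implementation) of the two phases
# of LLM inference: parallel prefill vs. sequential decode.

def prefill(prompt_tokens: list[int]) -> list[int]:
    """Process all prompt tokens at once; compute-bound and parallelizable."""
    # Stand-in for a full forward pass that builds the KV cache;
    # here the "cache" is just a copy of the tokens.
    return list(prompt_tokens)

def decode(cache: list[int], steps: int) -> list[int]:
    """Generate one token per step; each step depends on the last one."""
    out = []
    for _ in range(steps):
        nxt = (cache[-1] + 1) % 50_000  # dummy next-token rule
        cache.append(nxt)               # decode extends the cache serially
        out.append(nxt)
    return out

cache = prefill([101, 2054, 2003])  # whole prompt handled in one pass
print(decode(cache, 3))             # sequential, latency-sensitive phase
# → [2004, 2005, 2006]
```

The point of the sketch is structural: `prefill` touches every input token independently, while `decode` has a strict step-to-step dependency, which is why low per-token latency (Groq's pitch) matters most there.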

Research Data
Extracted Attributes
  • Founded

    2016

  • Revenue

    US$3.2 million (2023)

  • Founders

    Jonathan Ross and Douglas Wightman

  • Industry

    Semiconductor and Artificial Intelligence

  • Products

    Language Processing Unit (LPU), GroqCard, GroqRack

  • Key People

    Jonathan Ross (CEO), Sunny Madra (COO), Chamath Palihapitiya (Investor)

  • Headquarters

    Mountain View, California, US

  • Employee Count

    201-500 employees

  • Office Locations

    San Jose (CA), Liberty Lake (WA), Toronto (Canada), London (U.K.)

Timeline
  • Groq is founded by former Google engineers to build AI accelerator ASICs. (Source: Wikipedia)

    2016-01-01

  • Groq receives $10 million in seed funding from Social Capital. (Source: Web Search Results)

    2017-01-01

  • Groq reports annual revenue of US$3.2 million. (Source: Wikipedia)

    2023-12-31

  • Nvidia and Groq announce a $20 billion deal to license inference technology and transfer senior executives to Nvidia. (Source: All-In Podcast and Wikipedia)

    2025-12-01

Groq

Groq, Inc. is an American artificial intelligence (AI) company that builds an AI accelerator application-specific integrated circuit (ASIC). The architecture was originally introduced as a Tensor Streaming Processor (TSP) but was later rebranded as a Language Processing Unit (LPU) following the widespread adoption of large language models after the breakthrough of ChatGPT. The company also develops related computer hardware and software to accelerate AI inference performance. Examples of the types of AI workloads that run on Groq's LPU are: large language models (LLMs), image classification, and predictive analysis. Groq is headquartered in Mountain View, CA, and has offices in San Jose, CA, Liberty Lake, WA, Toronto, Canada, London, U.K. and remote employees throughout North America and Europe. In December 2025, Nvidia and Groq announced an agreement reportedly valued at approximately US$20 billion to license Groq's AI inference technology and to transfer several senior Groq executives to Nvidia. Groq stated that it would continue to operate as an independent company.

Web Search Results
  • Groq

    Groq was founded in 2016 by a group of former Google engineers, led by Jonathan Ross, one of the designers of the Tensor Processing Unit (TPU), an AI accelerator ASIC, and Douglas Wightman, an entrepreneur and former engineer at Google X (known as X Development), who served as the company's first CEO. Groq received seed funding from Social Capital's Chamath Palihapitiya, with a $10 million investment in 2017, and soon after secured additional funding. [...] Company type: Private. Industry: Semiconductor industry, Artificial Intelligence, Cloud computing. Founded: 2016. Founders: Jonathan Ross. Headquarters: Mountain View, California, US. Key people: Jonathan Ross (CEO), Sunny Madra (COO), Andrew S. Rappaport (Board Member), Chamath Palihapitiya (Investor), John Yetimoglu (Board Member/Investor). Products: Language Processing Unit (LPU). Revenue: US$3.2 million (2023).

  • Leadership Team

    Jonathan Ross is the CEO and founder of Groq. Groq is the innovator of the novel Tensor Streaming Processor compute architecture, accelerating workloads in AI, ML, and HPC through their product portfolio ranging from GroqCard™ to GroqRack™. Prior to founding Groq, Jonathan began what became Google's Tensor Processing Unit (TPU) as a 20% project, where he designed and implemented the core elements of the first-generation TPU chip. Jonathan next joined Google X's Rapid Eval Team, the initial stage [...] Established in 2016 for inference, Groq is literally built different. It's the only custom-built inference chip that fuels developers with the performance they need at a cost that doesn't hold them back. At Groq, high performance is the baseline, and it's contagious. It starts at the top.

  • Groq vs. Grok: What's Happening and Why It Matters | by James Fahey

    Reports about a $20 billion acquisition or licensing-plus-talent deal have circulated (though official terms may vary), and Groq continues to operate as a separate company even as some executives transition to Nvidia. Groq, Inc. is an AI hardware company founded by Jonathan Ross. It builds specialized Language Processing Units (LPUs): chips optimized for running large language models and other AI inference workloads extremely fast and efficiently. [...] Do Groq and Musk's Grok have anything to do with each other? Short answer: no, they are entirely separate. Groq is a hardware/semiconductor company that builds processors to run AI models faster; Grok (with a 'k') is a consumer AI chatbot product from Elon Musk's xAI. The similar names have caused confusion, which Groq has publicly pointed out. Other than a little social media banter and occasional trademark-related commentary, there is no corporate, technical, or strategic link between the two. Jonathan Ross, Groq's CEO, has used his social media channels to highlight the distinction and has publicly called on Elon Musk to rethink the Grok name due to potential trademark and branding issues.

  • What is Groq? - Prompt Engineering Guide

    Groq is one of the LLM inference companies that claimed, at the time of writing, 18x faster inference performance on Anyscale's LLMPerf Leaderboard compared to other top cloud-based providers. Groq currently makes available models like Meta AI's Llama 2 70B and Mixtral 8x7B via their APIs. These models are powered by the Groq LPU™ Inference Engine, which is built with their own custom hardware designed for running LLMs, called language processing units (LPUs). [...] Groq recently made headlines as one of the fastest LLM inference solutions available today. There is strong interest among LLM practitioners in reducing the latency of LLM responses, since latency is a key metric to optimize when enabling real-time AI applications, and many companies now compete in the LLM inference space.

  • Groq

    Groq is fast, low-cost inference. The Groq LPU delivers inference with the speed and cost developers need. Semiconductor Manufacturing • Mountain View, California • 201-500 employees. Groq is the AI inference platform delivering low cost, high performance without compromise. Its custom LPU and cloud infrastructure run today's most powerful AI models instantly and reliably. Over 2.5 million developers use Groq to build fast and scale with confidence. [...] inference could not keep up with real biotech workloads. Groq's production-grade inference changed that. It let Raycaster run huge parallel jobs in real time. Document scans felt instant. Research tasks streamed results as they happened. Groq turned Raycaster from a promising tool into a live, interactive system. Today, Raycaster helps biotechs move faster than industry norms and avoid the silent mistakes that delay lifesaving drugs. And it's only the beginning of what AI-native drug [...]
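The search results above describe Groq serving hosted models (e.g., Mixtral 8x7B) through an OpenAI-compatible HTTP API. As a minimal sketch of what a call looks like: the endpoint URL and the model identifier below are assumptions based on Groq's public documentation and may have changed, so treat this as illustrative rather than authoritative.

```python
import json
import os
import urllib.request

# Assumed OpenAI-compatible chat-completions endpoint (verify against
# Groq's current API docs before relying on it).
GROQ_API_URL = "https://api.groq.com/openai/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "mixtral-8x7b-32768") -> dict:
    """Assemble an OpenAI-style chat-completion payload.

    The model id is an assumption; substitute whichever model Groq
    currently lists.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask_groq(prompt: str) -> str:
    """Send the prompt to Groq and return the first completion's text.

    Requires a GROQ_API_KEY environment variable.
    """
    payload = build_chat_request(prompt)
    req = urllib.request.Request(
        GROQ_API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['GROQ_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-compatible response shape: choices[0].message.content
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__" and os.environ.get("GROQ_API_KEY"):
    print(ask_groq("In one sentence, what is an LPU?"))
```

Because the endpoint follows the OpenAI wire format, existing OpenAI client libraries can typically be pointed at Groq by swapping the base URL and API key, which is what makes the low-latency claims above easy to benchmark side by side.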