Fractile

Software Development

London, London 2,819 followers

Fractile is building chips that remove every bottleneck to running large language models at a global scale

About us

Fractile is building chips to run large language models two orders of magnitude faster. Existing hardware is good for training LLMs, but very poorly suited to subsequent inference of the trained model, which is increasingly the dominant workload. A network's weights need to be moved onto a chip once per word generated, and this movement takes a few hundred times longer than the computations themselves. Fractile's revolutionary approach to fusing computation with memory eliminates this bottleneck and scales to run the world's largest models globally.
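
The "few hundred times" gap can be sanity-checked with a rough roofline-style estimate: during autoregressive decoding, each weight is read from memory once per generated token while contributing only about two floating-point operations. A minimal sketch follows; the model size, precision, memory bandwidth and compute figures are illustrative assumptions, not Fractile's numbers.

```python
# Rough sketch of why LLM inference is memory-bound on conventional hardware.
# All hardware and model figures below are illustrative assumptions.

def decode_step_times(n_params, bytes_per_param, mem_bw_bytes_per_s, peak_flops_per_s):
    """Time to stream the weights once vs. time for the matching arithmetic,
    for one generated token at batch size 1."""
    weight_bytes = n_params * bytes_per_param        # every weight read once per token
    flops = 2.0 * n_params                           # ~1 multiply + 1 add per weight
    return weight_bytes / mem_bw_bytes_per_s, flops / peak_flops_per_s

# Assumed: 70B-parameter model in FP16 on an accelerator with ~2 TB/s memory
# bandwidth and ~300 TFLOP/s of dense FP16 compute.
t_mem, t_math = decode_step_times(70e9, 2, 2e12, 300e12)
print(f"weight movement: {t_mem * 1e3:.0f} ms/token")          # ~70 ms
print(f"arithmetic:      {t_math * 1e3:.2f} ms/token")         # ~0.5 ms
print(f"movement / arithmetic ratio: ~{t_mem / t_math:.0f}x")  # ~150x with these figures
```

Under these assumed figures, moving the weights takes on the order of a hundred times longer than the arithmetic they feed, consistent with the paragraph above.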

Website
https://www.fractile.ai/
Industry
Software Development
Company size
11-50 employees
Headquarters
London, London
Type
Privately Held
Founded
2022

Updates

  • We're proud to announce our partnership with Andes Technology Corporation! Andes has unmatched technical and commercial leadership on RISC-V vector processors and is a natural partner for Fractile as we build our AI inference accelerator systems. AI workloads change on a daily basis, and we're firm believers that software-programmable vector compute is a critical requirement for any AI accelerator to stay robust to these changes. Fractile integrates this into a unique in-memory compute architecture and software stack that will unlock 100x faster inference of frontier AI models in the data centre!

    Fractile Licenses Andes Technology's RISC-V Vector Processor as It Builds Radical New Chip to Accelerate AI Inference (design-reuse.com)

  • Today, Fractile exited stealth and announced $15m in seed funding. The round was co-led by Kindred Capital VC, NATO Innovation Fund (NIF) and Oxford Science Enterprises, with participation from Inovia Capital and Cocoa 🍫, as well as angel investors including Hermann Hauser and Stan Boland. This investment will enable us to expand our team across silicon, software and AI, build partnerships, and accelerate progress towards our first product: a groundbreaking AI chip capable of running state-of-the-art AI models up to 100x faster and 10x cheaper than existing hardware. To find out more, read our CEO Walter Goodwin's exclusive interview with Jeremy Kahn at Fortune: https://lnkd.in/esdzV-rr

    Can this tiny U.K. AI chip company best Nvidia? (fortune.com)

  • If this sounds interesting - we're hiring! 🚀

    Walter Goodwin, CEO & Founder, AI hardware @ Fractile:

    The long-awaited multi-modal LLM, Google Gemini, has just been released, and the report (which lands somewhere between an informative whitepaper and OpenAI's fully neutered GPT-4 marketing PDF in terms of what it shares) shows that it is, as anticipated, another big transformer model.

    Transformers may not be the endgame architecture, but I think there's a very strong case to be made that the massive matrix-vector multiplications that dominate transformer operations will be a hallmark of anything that comes next, too. Some people feel our leading model architectures should become less monolithic, more heterogeneous, more neuromorphic. My view is we won't see this any time soon. I think Sutton's "Bitter Lesson" essay is the Moore's Law of AI; in a nutshell, it says that, time and again over 70 years of AI, general-purpose approaches coupled with extreme scale have won out against methods that we painstakingly bake our own structures and priors into. Matrix-vector multiplications are about as un-human a way to make decisions as you could ask for, yet are unparalleled as a parameter sponge for soaking up scale! 🚀

    A relevant plug: at Fractile we are building accelerator chips to run models made up of massive matrices 100x faster and 200x cheaper than Nvidia GPUs - get in touch if you are interested; we're hiring for chip design and software roles! 🚀

    Gemini report: https://lnkd.in/ejy5dQmY
    Sutton's Bitter Lesson: https://lnkd.in/ef48xDGC

  • Have a read of Walter's examination of the AI hardware cost crisis that would hit if we rolled LLM copilots out on a global scale! There's a problem here, and we're hiring experienced engineers and scientists across semiconductor and machine learning roles to solve it at Fractile. If you'd like to work with brilliant people and be part of an ambitious project to re-invent the way we run AI compute, get in touch with your CV at hello@fractile.ai, or reach out to us on LinkedIn!

    Walter Goodwin, CEO & Founder, AI hardware @ Fractile:

    🤯 What a weekend in AI. Like many, I'm excited to see what will happen next with Sam Altman and Greg Brockman joining Microsoft. Microsoft is uniquely positioned to amplify much of what is brilliant about LLMs and roll out an intelligent layer over software used by 1.2 billion people (MS Office). If the computer is the bicycle for the mind, then seamless intelligent copilots in our word processors and spreadsheet editors are the e-bike revolution.

    We have a big obstacle to get there, though: hardware. A case study: GitHub Copilot - with 1.2 million users, it's 0.1% of the MS Office user base, and it's already reportedly a big compute cost headache for Microsoft, costing more to run per user than the $19/month fee. Little wonder that Microsoft just announced they'll be charging $30/month for 365 Copilot. Yes, that's nearly $0.5 trillion a year if every user signed up... 0.5% of global GDP! A majority of this would be eaten up by MS's hardware costs, though. Clearly this would not be sustainable.

    If the way we build hardware for AI doesn't fundamentally change, we won't ever see it deployed at these scales; it'll be an expensive tool acting to further the opportunity gap between those who can afford it and those who can't. At Fractile we're building the silicon that will enable us to run the world's state-of-the-art language models 100x faster and 1000x cheaper. If you're interested in learning more, send me a message! 🚀
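
The headline figures in the post above follow from simple arithmetic. A minimal check, assuming a world GDP of roughly US$100 trillion per year (an illustrative assumption, not a figure from the post):

```python
# Back-of-envelope check of the post's numbers; the global GDP figure is an assumption.
office_users = 1.2e9        # cited MS Office user base
price_per_month = 30.0      # announced 365 Copilot price, USD
global_gdp = 100e12         # assumed world GDP, USD per year

annual_spend = office_users * price_per_month * 12        # ≈ US$0.43 trillion per year
print(f"Annual spend if every user subscribed: ${annual_spend / 1e12:.2f} trillion")
print(f"Share of assumed global GDP: {annual_spend / global_gdp:.1%}")  # ≈ 0.4%
```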

Funding

Fractile: 2 total rounds
Last round: Grant, US$ 6.5M
See more info on Crunchbase