
Meta AI’s Code Llama
On August 24th, Meta AI released Code Llama, a large language model (LLM) that generates code from text prompts.
Code Llama is a new family of state-of-the-art coding models from Meta AI that has quickly reshaped the landscape of coding LLMs.
Key features:
- Three models:
- Code Llama, the foundational code model
- Code Llama – Python, specialized for Python
- Code Llama – Instruct, fine-tuned to follow natural language instructions
- Three sizes: 7B, 13B, and 34B parameters. Unlike Llama 2, there is no 65B or 70B version.
- Capable of generating code, and natural language about code, from both code and natural language prompts
- Support for a 100K-token context window, which is huge and very useful for coding (roughly 1,250 lines of code)
- Free for both research and commercial use!
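The 1,250-line figure above can be sanity-checked with simple arithmetic. The sketch below assumes an average of about 80 tokens per line of code; that average is our assumption for illustration, not a published figure, and real code varies widely.

```python
# Rough sanity check of the context-size claim: how many lines of code
# fit in a 100K-token context window?
# ASSUMPTION: ~80 tokens per line of code on average (illustrative only).
CONTEXT_TOKENS = 100_000
TOKENS_PER_LINE = 80

approx_lines = CONTEXT_TOKENS // TOKENS_PER_LINE
print(approx_lines)  # 1250
```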
Benefits:
- Speed and efficiency: Code Llama can save time and improve efficiency by automating tasks such as code generation and completion.
- Learning curve: Code Llama lowers the barrier to entry for people who are learning to code, or don’t know how, by giving them a tool that helps them generate and debug code.
- Safety: Code Llama is designed to be safe to use and has been tested thoroughly for security and reliability.
- Scalability: Code Llama is designed to scale, so it can serve businesses of all sizes and applications of varying complexity.
- Revolutionizing coding: Code Llama has the potential to change the way code is developed, written, and used by making it easier, faster, and more efficient to generate and debug code.
Overall, Code Llama is a powerful tool for a variety of code generation tasks. While it is still under active development, it has the potential to change the way code is written and used.
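Code Llama – Instruct follows the Llama 2 chat convention of wrapping the user’s request in `[INST] ... [/INST]` tags (with an optional `<<SYS>>` system block). A minimal sketch of building such a prompt is below; the helper name and example instruction are ours, and actually running the model (e.g. via its Hugging Face release) is omitted.

```python
# Minimal sketch of building a Code Llama - Instruct style prompt.
# The [INST]/[/INST] and <<SYS>> wrappers follow the Llama 2 chat
# convention; the helper name and example text are illustrative.

def build_instruct_prompt(instruction: str, system: str = "") -> str:
    """Wrap a natural-language instruction in the instruct prompt format."""
    if system:
        instruction = f"<<SYS>>\n{system}\n<</SYS>>\n\n{instruction}"
    return f"<s>[INST] {instruction} [/INST]"

prompt = build_instruct_prompt("Write a Python function that reverses a string.")
print(prompt)
```

The model’s completion would then follow the closing `[/INST]` tag.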
Other AI News
- Korean internet giant Naver has entered the generative AI arena with the launch of HyperCLOVA X, its advanced large language model (LLM). The new model powers CLOVA X, a question-answering chatbot now in beta testing for English and Korean. Aimed at enterprise users, HyperCLOVA X allows customization based on proprietary data. The model builds upon its predecessor, HyperCLOVA, enhancing its capabilities and addressing the limitations of previous LLMs, such as “hallucinations” or providing inaccurate or out-of-date information.
Naver also plans to integrate an AI feature called Cue into its search engine by November 2023, similar to Microsoft’s integration with Bing. Though specific parameter counts for HyperCLOVA X are undisclosed, Naver notes that the model has been trained on 6,500 times more Korean data than OpenAI’s ChatGPT, making it highly adapted to the local Korean language and culture.
Future developments for HyperCLOVA X include multimodal capabilities, enabling it to generate not just text but also images, videos, and sounds. The model will be open for customization across various industries, providing tailored solutions in fields like customer service and marketing. Naver outlines that its new model can automate customer inquiry classification and even generate marketing content specific to company characteristics.
- AI startup Modular has secured $100 million in a funding round spearheaded by General Catalyst, with contributions from GV (Google Ventures), SV Angel, Greylock, and Factory. The company aims to revolutionize AI infrastructure for developers worldwide through its Modular AI runtime engine and Mojo, its specialized programming language for AI.
Co-founders Chris Lattner and Tim Davis, both former Google employees involved with TensorFlow initiatives, have developed Modular’s engine as a compatible alternative to existing execution frameworks for PyTorch and TensorFlow. While the engine currently focuses on AI inference, plans are underway to extend its capabilities to training workloads.
- OpenAI has designated Scale AI as its “preferred partner” for optimizing the performance of its GPT-3.5 Turbo large language model (LLM). The announcement comes a day after OpenAI unveiled new built-in support enabling enterprises to fine-tune GPT-3.5 with proprietary data. Alexandr Wang, CEO of Scale AI, emphasized the importance of such fine-tuning for achieving accurate and efficient AI performance. The collaboration aims to integrate OpenAI’s advanced GPT-3.5 base model with Scale’s industry-leading Data Engine to expedite the development of customized, state-of-the-art models tailored to specific business needs.
Scale AI, which already specializes in fine-tuning a variety of commercial and open-source models, expressed enthusiasm about leveraging OpenAI’s APIs to serve an even broader enterprise market.
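The fine-tuning support mentioned above expects training data as a JSONL file of chat-style examples, one JSON object per line with system/user/assistant messages. The sketch below builds one such record; the message content is illustrative only, and uploading the file and launching the fine-tuning job via the OpenAI API are omitted.

```python
import json

# One training example in the chat-format JSONL used to fine-tune
# GPT-3.5 Turbo: a "messages" list of system/user/assistant turns.
# The content here is purely illustrative.
example = {
    "messages": [
        {"role": "system", "content": "You are a concise support assistant."},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Open Settings, then Security, and choose Reset password."},
    ]
}

line = json.dumps(example)  # one JSON object per line in the .jsonl file
print(line)
```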
- AI startup Hugging Face has confirmed a $235 million funding round, lifting its valuation to $4 billion. Led by Salesforce, the investment pool also features tech heavyweights like Google, Amazon, Nvidia, Intel, AMD, Qualcomm, IBM, and Sound Ventures. The newly acquired funds will be directed towards team expansion and the enhancement of open-source AI and collaboration platforms.
Hugging Face CEO Clement Delangue expressed that the round’s diverse investor lineup highlights the broad industry support for open-source AI and cements Hugging Face’s pivotal role in the AI ecosystem. With over a million repositories, including 500,000 models, 250,000 datasets, and 250,000 spaces, the company has grown substantially, up from 300,000 repositories earlier this year.
Delangue emphasized the unconditional nature of the funding, noting its importance to the company. The investment aligns with Salesforce’s strategy to deepen its focus on open-source AI solutions, particularly aimed at satisfying the growing AI needs of its enterprise customer base.
- Nvidia bets $25 Billion on Sustained AI Growth. Nvidia’s CEO Jensen Huang expressed bullish expectations for the sustained growth of the artificial intelligence (AI) sector, substantiating his optimism with a $25 billion share buyback plan. The move, generally indicative of a belief that the company is undervalued, exceeds Wall Street forecasts and outpaces similar investments in AI by other major tech firms.
Nvidia aims to continue ramping up production of its hardware, dispelling skepticism about the longevity of the current AI optimism surge. The company holds a dominant market position in computing systems essential for AI services like OpenAI’s ChatGPT. Huang attributes the surging demand to a shift away from traditional data centers towards centers built around Nvidia’s robust chips, as well as the increasing adoption of AI-generated content across various sectors.
Despite a recent dip in its price-to-earnings ratio, Nvidia remains undeterred, with one analyst highlighting the complexity and value of their HGX system—an entire computer constructed around their chip. The system, priced at $200,000 and comprising 35,000 components, has emerged as the key driver of the company’s sales this quarter, the analyst confirmed.
- Germany aims to Double AI Investment in Competition with China, U.S.
In an effort to narrow its AI research gap with global leaders China and the United States, Germany plans to nearly double its public funding for artificial intelligence to close to a billion euros over the next two years. Announced by Research Minister Bettina Stark-Watzinger, this move aims to bolster the nation’s economy and competitive edge in key sectors like automotive and chemicals.
While Germany’s investment pales in comparison to the U.S. government’s $3.3 billion AI research outlay in 2022, Stark-Watzinger highlighted that Europe’s more stringent regulatory framework on privacy and safety could make Germany an attractive destination for AI investment. She added that simpler regulations would also incentivize private research spending.
Germany’s strategic plan includes the establishment of 150 new university labs dedicated to AI research and the expansion of data centers. It aims to democratize access to complex public data sets, a notable initiative in a country where traditional methods like cash transactions and faxes are still prevalent.
Despite this surge in investment, Germany trails significantly in global AI spending. U.S. private investment in the sector reached $47.4 billion in 2022, dwarfing Europe’s total and China’s $13.4 billion. However, Stark-Watzinger noted that the number of AI startups in Germany has doubled in 2023, although the country still ranks ninth globally in that metric.
About The Author

Bogdan Iancu
Bogdan Iancu is a seasoned entrepreneur and strategic leader with over 25 years of experience in diverse industrial and commercial fields. His passion for AI, Machine Learning, and Generative AI is underpinned by a deep understanding of advanced calculus, enabling him to leverage these technologies to drive innovation and growth. As a Non-Executive Director, Bogdan brings a wealth of experience and a unique perspective to the boardroom, contributing to robust strategic decisions. With a proven track record of assisting clients worldwide, Bogdan is committed to harnessing the power of AI to transform businesses and create sustainable growth in the digital age.