Meta’s Llama 3.1: Transformer Design using DALL-E 3

Meta Unveils Llama 3.1: Its Largest Open AI Model Yet

Introduction

Meta has recently announced the release of its largest open-source AI model to date, Llama 3.1 405B, marking a significant advancement in the company’s AI capabilities. This model aims to rival and even surpass leading proprietary models like OpenAI’s GPT-4, offering extensive new features and benefits.

Features

  • Llama 3.1 405B Model: With 405 billion parameters, this is Meta’s largest model, designed to outperform major competitors.
  • Extended Context Window: All Llama 3.1 models feature a context window of 128,000 tokens, greatly enhancing their ability to handle longer conversations and larger documents.
  • Multilingual Support: The models support eight languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.
  • Open-Source Accessibility: All models are open source, allowing developers to freely access and use them in their own applications (a minimal loading sketch follows this list).
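
As a concrete illustration of that accessibility, the sketch below loads a Llama 3.1 checkpoint with Hugging Face transformers. It is a minimal example under stated assumptions: the model ID reflects the Hugging Face listing (the weights are gated behind Meta’s license), the 8B variant stands in for the 405B model, which requires a multi-GPU server, and the generation settings are illustrative only.

```python
# Minimal sketch: loading a Llama 3.1 instruct model with Hugging Face
# transformers. Model ID and chat-template usage are assumptions to verify
# on the Hugging Face hub; access requires accepting Meta's license.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"  # 8B stand-in; 405B needs a multi-GPU node

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to reduce memory use
    device_map="auto",           # spread layers across available devices
)

# Instruct variants ship a chat template inside the tokenizer.
messages = [{"role": "user", "content": "Summarize Llama 3.1 in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```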

Benefits

  • Superior Performance: Llama 3.1 405B has shown superior performance on multiple benchmarks, surpassing GPT-4 in several key areas.
  • Enhanced Usability: The expanded context window allows for more complex and detailed interactions without losing context, making it highly useful for tasks requiring long-term memory.
  • Customization and Flexibility: The open-source nature of the models allows for extensive customization and integration into diverse applications, from customer service to advanced research.
  • Community and Ecosystem Development: Meta encourages community engagement and collaboration through the Llama Stack, aiming to establish standardized interfaces for AI tools and applications.

Technical Details

  • Model Architecture: The Llama 3.1 models use a dense transformer architecture with significant upgrades over previous versions.
  • Training Data: Trained on over 15 trillion tokens, the models leverage synthetic data generation to enhance training efficiency and model accuracy.
  • Benchmarks: In tests such as MMLU and GSM8K, Llama 3.1 405B consistently outperforms its predecessors and rivals, trailing only by a small margin on HumanEval.
  • Safety and Security: Meta has implemented robust safety measures and responsible AI practices; note, however, that the models do not yet support multimodal capabilities.

Summary

Meta’s release of the Llama 3.1 405B model represents a significant leap in open-source AI development. With its extensive parameter count, expanded context window, and superior benchmark performance, Llama 3.1 positions itself as a formidable competitor to leading proprietary AI models. The open-source availability and community-focused development approach further enhance its appeal, promising a broad impact on the AI landscape.

Mistral Large 2: Transformer Design using DALL-E 3

Mistral Unveils Mistral Large 2: A New Generation AI Model

Introduction

Mistral AI has announced the release of Mistral Large 2, the latest iteration of their flagship model, boasting significant advancements in multilingual support, code generation, mathematics, and reasoning.

Features

  • Mistral Large 2 Model: Features 123 billion parameters and a 128k-token context window, and is designed for single-node inference (see the API sketch after this list).
  • Multilingual and Code Support: Supports dozens of natural languages and more than 80 programming languages.
  • Enhanced Reasoning: Improved accuracy in reasoning tasks and reduced hallucination.
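
As referenced above, here is a minimal sketch of querying Mistral Large 2 through Mistral’s hosted API using the mistralai Python client. The "mistral-large-latest" alias and the response shape follow the client’s v1 conventions, but treat both as assumptions to verify against the current documentation.

```python
# Minimal sketch: querying Mistral Large 2 via the hosted API with the
# `mistralai` Python client (v1). The model alias is an assumption; check
# Mistral's docs for the exact name pointing at Large 2.
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

response = client.chat.complete(
    model="mistral-large-latest",
    messages=[
        {
            "role": "user",
            "content": "Write a Python function that checks whether a string has balanced parentheses.",
        }
    ],
)
print(response.choices[0].message.content)
```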

Benefits

  • Superior Performance: Achieves high scores on benchmarks like MMLU, GSM8K, and MATH.
  • Customizability: The weights are available for research and non-commercial use under the Mistral Research License, allowing fine-tuning and adaptation.
  • Community Engagement: An openly released model that fosters innovation and collaboration across the ecosystem.

Technical Details

  • Training Data: Extensively trained on code and multilingual data, with a focus on improving reasoning and accuracy.
  • Benchmarks: Outperforms previous Mistral models and rivals top AI models in code generation and reasoning.

Summary

Mistral Large 2 sets a new standard for open-source AI models with its extensive capabilities and high performance across various benchmarks. Its release emphasizes Mistral AI’s commitment to advancing AI technology and fostering a collaborative ecosystem.

Other AI News

  • Apple Joins White House in Pledge for AI Safety

Apple has signed the White House’s voluntary commitment to promote the development of safe, secure, and trustworthy artificial intelligence (AI). This initiative, part of a broader effort to regulate the rapidly evolving field of AI, includes guidelines aimed at ensuring that AI systems are developed responsibly and transparently. Apple’s participation aligns it with other major tech companies, such as Google, Meta, Microsoft, and OpenAI, which have also pledged to adhere to these standards. These companies are expected to implement measures that include sharing safety test results, investing in cybersecurity, and reporting their AI systems’ capabilities and limitations.

The commitment underscores the importance of collaboration between the government and the private sector in addressing the ethical and safety challenges posed by AI. It reflects a growing consensus that proactive steps are necessary to mitigate risks associated with AI technologies, such as bias, privacy invasion, and misuse. By signing this agreement, Apple and its peers are contributing to a framework that aims to balance innovation with public trust and safety.

  • Privacy Watchdog Criticizes Elon Musk’s Use of User Data for AI Training

Elon Musk’s company, X (formerly Twitter), faced criticism from privacy advocates after it emerged that users had been automatically opted in to having their data used to train the company’s Grok AI, without explicit consent. The change was implemented quietly and discovered by users, prompting privacy watchdogs to voice surprise and concern over the lack of transparency and user control. They argue that such practices could undermine user trust and violate privacy norms.

Musk’s AI startup, xAI, developed Grok as part of a broader strategy to integrate advanced AI functionalities into X’s platform. However, the move to use user data for training AI models without prior notification has sparked debate over ethical AI practices and the need for clearer policies to protect user privacy. Critics emphasize the importance of user consent and transparency, warning that the absence of these principles could lead to significant backlash and regulatory scrutiny.

  • OpenAI Launches SearchGPT to Rival Google Search

OpenAI has introduced a new AI-driven search engine named SearchGPT, designed to compete directly with Google Search. The prototype, currently being tested with a small user group, aims to integrate advanced AI capabilities with real-time web information to provide quick and accurate answers with clear sources. This approach is expected to address concerns about accuracy and plagiarism, offering a more transparent search experience. By leveraging its AI expertise, OpenAI hopes to attract users away from Google, potentially impacting Google’s market dominance.

SearchGPT represents a significant step in AI search technology, emphasizing the importance of reliable and timely information. OpenAI’s move into the search market underscores the growing competition in AI technologies, with SearchGPT promising a novel approach to how users interact with and retrieve information online. This development could reshape the landscape of online search, challenging established players and driving innovation in the sector.

  • Google Enhances Gemini Chatbot and Expands Its Reach

Google has significantly improved its Gemini chatbot, unveiling a faster version called Gemini 1.5 Flash and integrating it into more platforms. This new iteration includes advanced features such as deeper logical analysis, interactive coding, and extended conversational capabilities. Gemini is now available in Google Workspace, assisting users in Gmail, Docs, Sheets, Slides, and Meet. This integration aims to boost productivity and provide a seamless AI-powered experience across Google’s suite of services.

Moreover, Google introduced Gemini Advanced, offering access to the Gemini Ultra 1.0 model, the company’s most sophisticated AI. This model supports multimodal capabilities, including voice and image interactions, and functions as a personal tutor, creative collaborator, and data analyst. Available as part of the Google One AI Premium Plan, Gemini Advanced positions Google to better compete in the AI space by enhancing user experience and utility across its applications.

  • ZoomInfo Alum Secures $15M for AI Sales Engineer Startup

Arjun Pillai, a former chief data officer at ZoomInfo, has raised $15 million in Series A funding for his new startup, DocketAI. The funding round was led by Mayfield Fund and Foundation Capital. Pillai, who has a history of successful entrepreneurial ventures, launched DocketAI to create AI-driven virtual sales engineers. These AI sales engineers aim to streamline the sales process by answering technical questions and drafting documents, thereby freeing human sales engineers to focus on more complex and strategic tasks. This approach is designed to increase productivity and efficiency in sales operations.

DocketAI integrates data from over 100 applications used by its clients, learning from the actions of top salespeople to provide accurate and scalable sales solutions. Unlike traditional AI tools, DocketAI doesn’t train on enterprise data but acts as a sophisticated search engine for workplace information. The company, which started selling its product earlier this year, has been rapidly acquiring enterprise clients, including notable names like ZoomInfo and Demandbase. By leveraging AI to handle routine inquiries, DocketAI helps companies improve their win rates and overall sales performance.

  • Meta Urged to Strengthen Policies on AI-Generated Explicit Images

Meta’s Oversight Board has called on the company to refine its policies regarding AI-generated explicit images, following a review of cases involving manipulated images of public figures from the United States and India. The Board found that Meta’s current policies, which categorize such content under the “Bullying and Harassment” section, are insufficient and recommended reclassifying these rules under the “Adult Sexual Exploitation” standards. This change aims to address the broader range of media manipulation techniques available today, especially those enabled by generative AI, and to make the rules more intuitive for users.

The Oversight Board’s recommendations also include using clearer terminology such as “non-consensual” instead of “derogatory sexualized photoshop,” to better describe unwanted sexualized image manipulations. They stressed the importance of not relying solely on media reports to identify non-consensual content, as this could leave many victims unprotected. Additionally, the Board criticized the practice of auto-closing appeals for image-based sexual abuse and urged Meta to improve its review processes to mitigate the harm caused by delayed responses.

  • Researchers Utilize iPhone Scans to Train Home Robots in Simulations

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a novel method to train home robots using simulations based on iPhone scans. By allowing users to scan their home environments with an iPhone, the data can be uploaded into a simulation where robots can practice tasks in a virtual setting. This approach enables robots to attempt tasks thousands or even millions of times in a short period, significantly reducing the time and cost associated with physical trials. The use of iPhone scans makes the simulation process more accessible and adaptable, improving the robot’s ability to navigate and perform tasks in the dynamic and varied environments typical of homes.

Simulation has become a fundamental part of robot training due to its efficiency and safety. Robots can learn and refine their tasks without the risk of physical damage or the need for extensive resources. This method is particularly useful for non-vacuum home robots, which face challenges in unstructured environments. The scanned data helps create a robust database of different home settings, enhancing the robot’s adaptability when encountering changes such as moved furniture or unexpected obstacles. By leveraging this technology, researchers aim to advance the development of versatile home robots capable of performing a wide range of tasks.

  • Bing Introduces AI-Generated Summaries to Compete with Google

Microsoft has unveiled Bing Generative Search, its response to Google’s AI Overviews, aiming to enhance search results with AI-generated summaries. This new feature, currently available to a small percentage of users, uses a combination of large and small generative AI models to aggregate and summarize information from across the web. This approach is designed to offer users quick, concise answers directly within the search interface, potentially streamlining the search experience and improving efficiency.

The rollout of Bing Generative Search comes as Microsoft seeks to address some of the issues that plagued Google’s AI Overviews, such as inaccurate or potentially harmful advice. By focusing on reliable and accurate information, Microsoft hopes to avoid the pitfalls experienced by its competitors and ensure that the generative AI responses enhance rather than detract from the user experience. Additionally, Microsoft is mindful of the impact on website traffic, striving to maintain click-through rates to original content sources while integrating AI summaries into search results.

  • Venture Capital Investment in Generative AI Startups Continues to Surge

Despite economic uncertainties and previous market slowdowns, venture capital (VC) investment in generative AI startups remains robust. In the first half of 2024, VCs poured billions into this sector, with notable deals such as Cohere’s $450 million funding round at a $5 billion valuation and Perplexity’s $62.7 million raise at a $1.04 billion valuation. This trend indicates sustained confidence in the potential of generative AI to transform various industries, from customer service to creative content generation.

VC firms like Sequoia and Andreessen Horowitz lead the pack in generative AI investments, reflecting a broader trend where AI valuations significantly outpace those in other tech verticals. This influx of capital is driven by the increasing applicability of generative AI technologies across sectors and the anticipated long-term value they promise. Despite some challenges, such as potential AI “hallucinations” and integration complexities, the overall outlook for generative AI investments remains optimistic, with continued growth expected in the coming years.

  • Scientists Warn of ‘Model Collapse’ in AI Training Practices

Researchers are raising alarms about a phenomenon known as “model collapse,” where AI models degrade over time if trained predominantly on data generated by other AI models. This process can lead to AI systems progressively producing less accurate and more homogenized outputs, as they lose the diversity of information necessary for nuanced understanding. The concern is that as AI-generated content becomes more prevalent online, new AI models trained on this data could deteriorate in quality, becoming less effective and potentially forgetting fundamental information.

The research, led by Ilia Shumailov of the University of Oxford and published in Nature, emphasizes the importance of maintaining a significant portion of human-generated data in AI training sets. It warns that reliance on AI-generated content for training could lead to an AI ecosystem where models are unable to provide accurate or meaningful responses. The study suggests that integrating real human interaction data is crucial to prevent this degradation and ensure the continued effectiveness of AI systems.
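
The failure mode is easy to reproduce in miniature. The toy sketch below repeatedly fits a Gaussian to samples drawn from the previous generation’s fit; with finite samples, the estimated spread drifts and the tails of the original distribution are lost first. This is a didactic illustration only, not the experimental setup of the Nature paper.

```python
# Toy illustration of model collapse: each "generation" fits a Gaussian to
# samples produced by the previous generation's fit. With finite samples the
# estimated spread drifts, and the original distribution's tails vanish first.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=1_000)  # generation 0: "human" data

for generation in range(1, 11):
    mu, sigma = data.mean(), data.std()        # "train" a model on current data
    data = rng.normal(mu, sigma, size=1_000)   # next generation: synthetic only
    print(f"generation {generation}: mu={mu:+.3f}, sigma={sigma:.3f}")
```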

  • Lakera Secures $20M to Enhance AI Security for Enterprises

Lakera, a Swiss startup specializing in AI security, has raised $20 million in a Series A funding round led by Atomico, with participation from Citi Ventures and Dropbox. The company focuses on protecting generative AI applications from various vulnerabilities, including malicious prompts, AI “sleeper agents,” and AI-targeted worms. Lakera’s flagship product, Lakera Guard, leverages a massive repository of AI-focused cyberattack data, updated with over 100,000 entries daily. This enables the company to provide real-time security solutions that are easy to deploy and do not compromise user experience.

Founded in 2021, Lakera has quickly gained traction, securing significant customers such as Dropbox and a major U.S. bank. The new funding will be used to expand Lakera’s presence in the U.S. and accelerate product development. The company’s approach also includes an educational game called Gandalf, which helps surface AI vulnerabilities and has drawn more than 250,000 users, including people at major organizations like Microsoft. This strategic funding aims to bolster Lakera’s capabilities in securing AI applications as more enterprises adopt generative AI technologies.

  • Mark Zuckerberg Envisions AI Clones for Content Creators

Mark Zuckerberg has outlined a future where content creators can leverage AI to create digital clones of themselves, significantly amplifying their productivity and presence. This concept is part of Meta’s broader push to integrate AI technologies into its platforms, enabling creators to maintain a consistent output and interact with their audience without the constant need for manual content creation. These AI clones could handle routine tasks, engage with followers, and even generate new content, thus freeing creators to focus on more strategic and creative endeavors.

Zuckerberg’s vision underscores the potential for AI to revolutionize the content creation industry, making it easier for creators to scale their operations and enhance their engagement. By utilizing advanced AI models, these digital replicas can mimic the creator’s style and personality, ensuring that the content remains authentic and relatable. This innovation could transform how creators manage their online presence, offering a sustainable solution to the demands of continuous content production.

  • Elon Musk Announces 2026 Sale Date for Optimus, Tesla’s Humanoid Robot

Tesla CEO Elon Musk has announced that the company’s humanoid robot, Optimus, will be available for commercial sale by 2026. The robot, designed to handle repetitive and hazardous tasks, is already performing basic autonomous functions such as moving batteries in Tesla’s facilities. Initially, Optimus will be produced for internal use at Tesla in 2025, with plans for wider distribution the following year. Musk envisions a significant demand for these robots, estimating a long-term market need of up to 20 billion units globally, driven by both individual and industrial applications.

The announcement underscores Tesla’s strategic shift towards robotics, a sector that has gained momentum due to advancements in AI and automation, particularly in response to pandemic-induced staffing shortages. However, Tesla faces stiff competition from companies like Boston Dynamics and Honda, which are also advancing in the humanoid robotics field. Despite Musk’s ambitious projections, industry experts caution that significant challenges remain in developing AI systems and mechatronics that can perform complex tasks reliably at scale. As the industry evolves, the race to bring practical humanoid robots to market continues to heat up.

  • OpenAI-Backed LegalTech Startup Harvey Raises $100M in Series C Funding

Harvey, a legaltech startup supported by OpenAI, has successfully raised $100 million in a Series C funding round led by GV, Google’s corporate venture arm. This round also saw participation from notable investors such as Kleiner Perkins, Sequoia Capital, Elad Gil, and SV Angel. The new funding brings Harvey’s total raised to $206 million, valuing the company at $1.5 billion. Harvey, co-founded by former litigator Winston Weinberg and AI scientist Gabriel Pereyra, aims to enhance legal workflows by leveraging AI technology. Their suite of tools, powered by a customized version of OpenAI’s GPT-4, helps legal teams extract key information from documents, generate first drafts, and find relevant legal precedents more efficiently.

The capital infusion will be used to expand Harvey’s team, improve its AI models, and extend its reach into new geographies. Despite competition from other legal AI startups like Casetext and Klarity, Harvey has made significant inroads, being used by tens of thousands of lawyers daily at major law firms and consultancies such as Allen & Overy and PwC. The founders emphasized that while their tools can significantly boost productivity, they are designed to be used under the supervision of licensed attorneys to mitigate risks associated with AI-generated content.

  • Meta AI Introduces New Selfie Features and Expands Quest Support

Meta has rolled out new AI-powered features for its platforms, including the “Imagine Me” selfie tool and enhanced support for Quest headsets. The “Imagine Me” feature, powered by Meta’s latest generative AI model, allows users to create images based on their photos and a prompt, such as “Imagine me surfing.” This tool is designed to enhance user creativity and engagement by generating personalized images from simple text inputs.

In addition to the selfie feature, Meta AI is being integrated into Quest 3 and Quest Pro headsets, offering users a virtual assistant capable of answering questions and providing real-time information using Bing’s search capabilities. The AI assistant, which is currently in experimental mode and available only in the U.S. and Canada, can recognize objects and provide contextual answers, such as translating text or identifying items in the user’s environment. This marks a significant step in merging AI with augmented reality, enhancing the functionality and interactivity of Meta’s VR and AR devices.

  • Former Tesla Humanoid Head Launches Robotics Startup Mytra with $78M Funding

Chris Walti, the former head of Tesla’s Optimus humanoid robot project, has launched a new robotics startup called Mytra. The company, focused on warehouse automation, has raised $78 million in funding across three rounds, with investors including Eclipse Ventures and Greenoaks. Mytra’s robots are designed to handle the high-speed, complex environments of modern warehouses, using advanced AI to navigate and avoid collisions. The startup aims to revolutionize material handling, making operations more efficient and cost-effective.

Walti’s experience at Tesla, where he scaled autonomous robots, has driven his vision for Mytra. The company’s solution targets practical issues in material flow, addressing a significant need in the logistics industry. With high-profile pilots already in place, including one with grocery giant Albertsons, Mytra is set to make a substantial impact on warehouse operations globally.

  • Adobe Enhances Illustrator and Photoshop with New Firefly AI Tools

Adobe has introduced new AI-powered features to its Illustrator and Photoshop software through its Firefly AI tools. These updates, released on July 23, 2024, aim to streamline the creative process for graphic designers. One of the notable additions is the “Generative Shape Fill” tool in Illustrator, which allows users to add textures and details to shapes using text prompts or style references. This tool is powered by an updated version of Adobe’s Firefly Vector model, providing creators with more control and customization options in their designs.

In Photoshop, new Firefly tools enable users to generate images and textures quickly by simply describing what they want in short text prompts. This significantly reduces the time needed to create complex visuals. Adobe’s approach includes offering Creative Cloud customers a limited number of generative credits each month at no additional cost, integrating these advanced AI features seamlessly into existing workflows. This initiative aims to make powerful AI tools accessible while addressing concerns about AI-generated content by compensating contributors to Adobe Stock.

  • Vayu Robotics Shifts from LiDAR to Foundation Models for Enhanced Delivery Robots

Anand Gopalan, former CEO of Velodyne, has led his new company, Vayu Robotics, to transition from relying on LiDAR technology to using foundation models for its autonomous delivery robots. Founded in 2022, Vayu Robotics aims to enhance the affordability and scalability of delivery solutions by leveraging advanced AI models. Gopalan’s shift from LiDAR, traditionally essential for autonomous navigation, to foundation models reflects a strategic move to capitalize on the robust and versatile capabilities of large-scale neural networks. These models promise improved performance in complex environments, addressing some limitations of LiDAR, such as high costs and environmental sensitivity.

This strategic change highlights a significant innovation within the robotics field. Foundation models can process vast amounts of data, allowing delivery robots to better understand and navigate their surroundings. This move aims to enhance the reliability and efficiency of autonomous deliveries, potentially transforming the logistics sector. By integrating these advanced AI systems, Vayu Robotics is poised to overcome some of the key challenges faced by traditional LiDAR-based systems, marking a notable shift in the development and deployment of autonomous robotic technologies.

  • Level AI Enhances Contact Center Efficiency with Advanced Algorithms

Level AI, founded by Ashish Nagar in 2019, leverages AI to optimize productivity in contact centers. The platform provides tools to automate customer service tasks, score agents on performance metrics, offer real-time hints during interactions, and assess customer sentiment. These capabilities aim to improve service quality and management. Despite challenges such as data privacy and integration concerns, Level AI has secured $39.4 million in Series C funding, aiming to expand its user base and workforce.

Level AI’s innovative approach has attracted clients like Affirm, Penske, and Carta. The company plans to use the new funding to broaden its market reach and enhance its 135-member team. Nagar projects the company could achieve $50 million in annual recurring revenue within two years. This growth reflects the increasing adoption of AI in the contact center industry, addressing key operational pain points while navigating concerns about AI integration and job displacement.

  • UK School Reprimanded for Illegal Use of Facial Recognition Technology

Chelmer Valley High School in Chelmsford, Essex, has been formally reprimanded by the UK’s Information Commissioner’s Office (ICO) for unlawfully using facial recognition technology without obtaining explicit opt-in consent from students. The school implemented this technology in March 2023 for cashless lunch payments, replacing the fingerprint system used since 2016. The ICO’s investigation revealed that the school did not conduct a necessary Data Protection Impact Assessment (DPIA) and failed to secure clear affirmative consent from students, a violation of the UK’s General Data Protection Regulation (GDPR).

The ICO emphasized the importance of protecting children’s biometric data, highlighting that the school’s opt-out approach contradicted GDPR requirements. Privacy advocacy groups, such as Big Brother Watch, expressed concerns about the misuse of biometric data and its implications for student privacy. While the ICO did not impose a fine, it issued a public reprimand to ensure better compliance with data protection laws in the future.

  • DeepMind’s Breakthrough in LLM Interpretability with Sparse Autoencoders

Google DeepMind has advanced the interpretability of large language models (LLMs) by enhancing sparse autoencoders (SAEs) with a JumpReLU activation. SAEs address the complexity and opacity of LLMs by activating only a small subset of features when encoding a model’s internal activations, which simplifies the task of understanding how these models process and generate information. The JumpReLU variant could significantly improve the transparency and fidelity of these reconstructions, making LLMs more comprehensible and reliable for various applications.

The research highlights the potential of SAEs to enhance AI interpretability, a critical aspect for ensuring ethical and effective AI deployment. By shedding light on the internal mechanisms of LLMs, DeepMind’s work aims to build trust and foster greater adoption of these models in real-world scenarios. This development is expected to contribute to more robust AI systems that can be better understood and controlled by researchers and practitioners, addressing some of the major concerns associated with AI technology.
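
For readers who want the mechanics, the sketch below shows a toy JumpReLU sparse autoencoder in PyTorch. It is a minimal illustration of the idea described above, not DeepMind’s published recipe: the dimensions, threshold parameterization, and sparsity penalty are assumptions, and the straight-through gradient estimators used in practice are omitted.

```python
# Toy JumpReLU sparse autoencoder in PyTorch. A simplified sketch of the
# idea, not DeepMind's training recipe: real implementations use
# straight-through estimators so gradients can flow past the hard threshold.
import torch
import torch.nn as nn

class JumpReLUSAE(nn.Module):
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)
        # Learned per-feature threshold (kept positive via exp).
        self.log_threshold = nn.Parameter(torch.zeros(d_hidden))

    def forward(self, x: torch.Tensor):
        pre = self.encoder(x)
        theta = self.log_threshold.exp()
        # JumpReLU: keep the raw pre-activation only where it clears the
        # threshold; unlike ReLU, surviving units are not shrunk toward zero.
        z = pre * (pre > theta).float()
        recon = self.decoder(z)
        l0 = (pre > theta).float().sum(dim=-1).mean()  # avg active features
        return recon, z, l0

sae = JumpReLUSAE(d_model=768, d_hidden=8 * 768)
acts = torch.randn(4, 768)              # stand-in for LLM residual activations
recon, z, l0 = sae(acts)
loss = ((recon - acts) ** 2).mean() + 1e-3 * l0  # reconstruction + sparsity
```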

  • Microsoft Introduces Serverless Fine-Tuning for Phi-3 Small Language Model

Microsoft has unveiled serverless fine-tuning for its Phi-3 small language model, enabling developers to customize AI models without managing server infrastructure. The Phi-3 models, including Phi-3-mini, Phi-3-small, and Phi-3-medium, are designed to offer high performance and cost-effectiveness. This new feature allows for efficient fine-tuning to improve model performance in specific tasks, making it easier for developers to integrate AI solutions in various applications, from cloud to edge computing scenarios.

The Phi-3 models, available on platforms like Microsoft Azure AI Model Catalog and Hugging Face, are particularly suitable for applications requiring local execution, such as mobile devices and environments with limited connectivity. These models offer flexibility in deployment, minimizing latency and enhancing privacy by keeping data processing on-device. Microsoft’s approach emphasizes the use of high-quality, curated training data to ensure the models’ accuracy and reliability, addressing the challenges of safety and ethical AI usage.
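
As an illustration of the local-execution use case described above, here is a minimal sketch that runs Phi-3-mini with Hugging Face transformers. The model ID matches the public hub listing; the chat-style pipeline usage and generation settings are assumptions that may need adjusting for your transformers version.

```python
# Minimal sketch: running Phi-3-mini locally with Hugging Face transformers.
# The model ID matches the public hub listing; chat-style pipeline usage and
# generation settings are assumptions for recent transformers versions.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="microsoft/Phi-3-mini-4k-instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,  # Phi-3 repos ship custom modeling code
)

messages = [{"role": "user", "content": "Explain serverless fine-tuning in two sentences."}]
out = generator(messages, max_new_tokens=96, do_sample=False)
print(out[0]["generated_text"][-1]["content"])  # the assistant's reply
```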

  • Runway Faces Backlash for Unauthorized Use of YouTube Videos in AI Training

Runway, an AI video startup, is facing significant backlash after reports emerged that it used thousands of YouTube videos and pirated films to train its AI models without obtaining permission. The controversy, highlighted by a report from 404 Media, revealed that Runway’s training data included content from major entertainment companies like Disney, Pixar, and Netflix, as well as popular YouTube creators such as Marques Brownlee and Casey Neistat. This unauthorized use of data has raised serious ethical and legal concerns, sparking debates about the practices of AI training across the industry.

The incident has not only put Runway under scrutiny but also brought attention to similar practices by other tech giants like OpenAI, Apple, Anthropic, and Nvidia, who have also been linked to using YouTube content for AI training without consent. The use of such data violates YouTube’s policies, as emphasized by YouTube CEO Neal Mohan. This situation underscores the need for clearer regulations and ethical guidelines in AI development to protect content creators’ rights and ensure responsible AI practices.

About The Author

Bogdan Iancu

Bogdan Iancu is a seasoned entrepreneur and strategic leader with over 25 years of experience in diverse industrial and commercial fields. His passion for AI, Machine Learning, and Generative AI is underpinned by a deep understanding of advanced calculus, enabling him to leverage these technologies to drive innovation and growth. As a Non-Executive Director, Bogdan brings a wealth of experience and a unique perspective to the boardroom, contributing to robust strategic decisions. With a proven track record of assisting clients worldwide, Bogdan is committed to harnessing the power of AI to transform businesses and create sustainable growth in the digital age.