Google DeepMind introduced a new artificial intelligence training method designed to reduce computing costs and energy consumption, potentially impacting the economics of AI development and its applications in online commerce and global customer support.
The new technique, called JEST (joint example selection), reportedly delivers a 13-fold increase in performance and a tenfold improvement in power efficiency compared to existing methods. As debate continues over the environmental impact and cost of AI data centers, the innovation could lower barriers to entry in the AI industry and accelerate advances, particularly in eCommerce applications and multilingual support. Experts say better training methods are central to that progress.
“New training methods for large language models (LLMs) are essential due to the rapidly evolving nature of data and the increasing demand for models that can adapt to new information and contexts,” Dmytro Shevchenko, a data scientist from Aimprosoft.com, told PYMNTS.
AI training methods have evolved since the inception of machine learning. Traditional approaches often rely on supervised learning, where models are trained on labeled datasets. More recent developments include unsupervised learning, which identifies patterns in unlabeled data, and reinforcement learning, where models learn through trial and error. The field has seen a shift toward more efficient and specialized training techniques as the complexity and size of AI models have grown.
The JEST method differs from traditional AI training techniques by focusing on entire batches of data rather than individual data points. First, a smaller AI model grades the quality of data drawn from high-quality sources and ranks batches accordingly. Those rankings are then compared against a larger, lower-quality dataset. The small JEST model identifies the batches most suitable for training, and the large model is then trained on them.
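The batch-selection idea described above can be sketched in a few lines of Python. This is a toy illustration only: the scoring functions, the "learnability" heuristic and all data below are invented stand-ins, not DeepMind's implementation, which relies on trained neural models rather than string matching.

```python
import random

random.seed(0)

def small_model_score(example):
    """Stand-in for the small reference model trained on curated,
    high-quality data. Toy proxy: cleaner text scores higher."""
    return 1.0 / (1.0 + example.count("noise"))

def learner_score(example):
    """Stand-in for the large learner's current score on an example."""
    return random.random()

def select_batches(pool, batch_size, n_batches):
    """Rank whole candidate batches, not individual points, by how much
    the reference model favors them relative to the current learner."""
    candidates = [random.sample(pool, batch_size) for _ in range(n_batches * 4)]

    def learnability(batch):
        return sum(small_model_score(x) - learner_score(x) for x in batch)

    # Keep the batches the small model deems most suitable for training.
    return sorted(candidates, key=learnability, reverse=True)[:n_batches]

pool = ["clean text"] * 50 + ["noise noise text"] * 50
best = select_batches(pool, batch_size=8, n_batches=3)
```

The large model would then be trained only on the selected batches, which is where the reported compute and energy savings come from: most of the raw pool is never touched by the expensive model.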
The need for improved training methods extends beyond general adaptability. Language I/O CEO and founder Heather Morgan Shoemaker told PYMNTS that new methods are crucial for language models to respond accurately to questions about niche or sensitive domains.
“It could be a sensitive domain related to healthcare or finance that deals with very sensitive information that is intentionally not intended for consumption by LLM training algorithms,” Shoemaker said.
Several emerging approaches in AI training could impact online commerce. One such method is reinforcement learning from human feedback (RLHF), which involves fine-tuning models based on user interactions. This approach can improve recommendation systems, leading to more personalized and relevant product offerings.
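A full RLHF pipeline trains a reward model and then optimizes a policy against it, but the core preference signal can be shown with a simplified pairwise update. Everything here is a hypothetical sketch: the two-feature items, learning rate and update rule are invented for illustration.

```python
def score(weights, features):
    """Linear score standing in for a recommendation model's output."""
    return sum(w * f for w, f in zip(weights, features))

def update_from_preference(weights, preferred, rejected, lr=0.1):
    """Nudge weights so the human-preferred item outscores the rejected
    one. A simplified pairwise-preference step, not a full RLHF loop."""
    if score(weights, preferred) <= score(weights, rejected):
        return [w + lr * (p - r) for w, p, r in zip(weights, preferred, rejected)]
    return weights

weights = [0.0, 0.0]
preferred = [1.0, 0.2]  # e.g., an item the shopper clicked
rejected = [0.1, 1.0]   # an item the shopper skipped
for _ in range(20):
    weights = update_from_preference(weights, preferred, rejected)
```

After the loop, the model ranks the clicked item above the skipped one, which is the behavior a recommendation system fine-tuned on user interactions is after.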
Another technique is parameter-efficient fine-tuning (PEFT), which efficiently adapts AI models to specific tasks or domains. This method could be useful for online retailers looking to optimize their algorithms during peak sales periods.
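One popular PEFT variant, LoRA, freezes the base model's weights and trains only a small low-rank correction. The miniature matrices below are invented to show the shape of the idea; real PEFT libraries apply this to transformer weight matrices with millions of entries.

```python
def matvec(M, v):
    """Multiply matrix M by vector v (plain-Python helper)."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# Frozen base weights (4x4 identity here) are never updated.
W = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]

# Trainable low-rank factors: a 4x1 and a 1x4 matrix give
# 8 tunable parameters instead of the full 16.
A = [[0.5], [0.0], [0.0], [0.0]]
B = [[0.0, 0.0, 0.0, 1.0]]

def adapted_forward(x):
    base = matvec(W, x)
    delta = matvec(A, matvec(B, x))  # low-rank update: A @ B @ x
    return [b + d for b, d in zip(base, delta)]

out = adapted_forward([1.0, 2.0, 3.0, 4.0])
```

Because only the small factors are trained, a retailer could keep several task-specific adapters (say, one tuned for a holiday sales period) on top of one shared base model.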
A frequently overlooked aspect of AI development is ensuring language models can provide accurate responses across an organization’s full range of supported languages. Many companies mistakenly assume their AI systems can effectively translate content, particularly specialized terminology, between languages. However, this assumption often leads to inaccuracies in multilingual communication, especially when dealing with industry-specific jargon or complex concepts.
To address this issue, some organizations are developing new approaches to multilingual AI training. Language I/O, for instance, has created a retrieval augmented generation (RAG) process built around a multilingual-first approach.
“We don’t rely on a general LLM to inaccurately translate to and from a single base language,” Shoemaker said. “We equip it to respond natively, in the requestor’s language. This approach can enhance the accuracy of multilingual support in eCommerce settings.”
New AI improvements could change online shopping through better product suggestions, improved customer service and smoother business operations. AI that understands more languages could help companies grow worldwide and make customers happier. Faster AI training might mean quicker deployment of AI for business tasks such as inventory management and customer service chatbots. With more accurate multilingual AI, businesses could enter new markets more efficiently and offer localized service without human translators.
“Improved training approaches can enhance online commerce by enabling more accurate, context-aware multilingual support,” Shoemaker said. “This leads to better customer experiences, reduced language barriers and potentially increased revenue. For example, in gaming or technical support scenarios, precise translation of specialized terms is crucial for effective communication.”