Betting on strength in numbers, Apple made a strategic move in the competitive AI market by releasing eight small AI models. The compact models, collectively called OpenELM, are designed to run directly on mobile and standalone devices, making them ideal for smartphones.
Published on Hugging Face, the open-source AI community hub, the models come in versions with 270 million, 450 million, 1.1 billion and 3 billion parameters. Each size can be downloaded in a pre-trained or instruction-tuned variant.
Pre-trained models provide a foundation that users can fine-tune and build on. Instruction-tuned models have already been trained to follow instructions, making them better suited to conversing and interacting with end users.
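As an illustration, either variant can be loaded through the Hugging Face transformers library. The following is a minimal sketch, assuming the apple/OpenELM-270M repository ID used on Hugging Face and the model card's note that OpenELM reuses the Llama 2 tokenizer; it is not an official Apple example.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository ID; the instruction-tuned variant is published
# under a name like "apple/OpenELM-270M-Instruct".
model_id = "apple/OpenELM-270M"

# OpenELM ships custom modeling code, so trust_remote_code is required.
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# OpenELM has no tokenizer of its own; its model card points to the
# Llama 2 tokenizer (a gated repo that requires requesting access).
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

inputs = tokenizer("Small on-device language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```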
While Apple hasn't named specific use cases for the models, they could run assistants that analyze email and text messages or provide intelligent suggestions based on the user's data. This is similar to the approach taken by Google, which brought its Gemini AI model to the Pixel line of smartphones.
The models were trained on publicly available datasets, and Apple shares both the CoreNet code (the library used to train OpenELM) and the “recipes” for its models. In other words, users can check how Apple created them.
Apple's launch comes shortly after Microsoft announced Phi-3, a family of small language models capable of running locally. Phi-3 Mini, a 3.8-billion-parameter model trained on 3.3 trillion tokens, can still handle a 128,000-token context window, making it comparable to GPT-4 and ahead of Llama 3 and Mistral Large in context capacity.
Being open source and lightweight, Phi-3 Mini has the potential to replace traditional assistants such as Apple's Siri or Google's Gemini for some tasks. Microsoft has already tested Phi-3 on an iPhone and reported satisfactory results and fast token generation.
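Running Phi-3 Mini locally follows the same pattern as above. A minimal sketch, assuming the microsoft/Phi-3-mini-128k-instruct repository ID on Hugging Face for the 128K-context variant:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository ID for the 128K-context variant; a 4K-context
# variant is published under a similar name.
model_id = "microsoft/Phi-3-mini-128k-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision keeps the 3.8B weights within laptop RAM
    trust_remote_code=True,
)

# Chat-style prompt, formatted with the model's own chat template.
messages = [{"role": "user", "content": "Summarize: the meeting moved to Friday at 3pm."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```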
While Apple has yet to integrate these new AI capabilities into its consumer devices, the upcoming iOS 18 update is rumored to include new AI features that use on-device processing to protect user privacy.
Apple's hardware has an advantage for local AI because Apple Silicon uses unified memory: the device's RAM and the GPU's working memory are one shared pool. This means a Mac with 32GB of RAM (a typical PC configuration) can make that full 32GB available to the GPU for running AI models. Windows devices, by comparison, keep system RAM and GPU video memory (VRAM) separate, so users often need to buy a powerful GPU with 32GB of dedicated VRAM to run large AI models.
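This unified-memory advantage is visible in frameworks like PyTorch, whose Metal Performance Shaders (MPS) backend lets the GPU draw on the same memory pool as the CPU. A minimal sketch, reusing the assumed OpenELM repository ID from above:

```python
import torch
from transformers import AutoModelForCausalLM

# On Apple Silicon, the MPS backend exposes the GPU; its working
# memory is the machine's unified RAM rather than separate VRAM.
device = "mps" if torch.backends.mps.is_available() else "cpu"

model = AutoModelForCausalLM.from_pretrained(
    "apple/OpenELM-270M", trust_remote_code=True  # assumed repo ID
).to(device)

print(f"Model loaded on {device}: no separate VRAM ceiling to manage.")
```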
However, Apple lags behind Windows/Linux in AI development. Most AI applications are built for hardware designed and manufactured by Nvidia, which Apple abandoned in favor of its own chips. As a result, relatively little AI development targets Apple hardware, and running AI on Apple products often requires extra translation layers or other workarounds.