
Wednesday, December 20, 2023

A Tale of a Close Partnership


https://stratus.campaign-image.in/images/83238000017650001_zc_v1_1699009295190_daily_xo_(1).jpg


TAUSIF ALAM & AMIT RAJA NAIK

Wednesday, Dec 20, 2023 | Was this email forwarded to you? Sign up here

___________________________________________________________


While the AI world faces a GPU crisis, AMD has figured out a way to capitalise on the situation. In September, the chip company tied up with Lamini, which helps startups build and run generative AI products by fine-tuning existing foundation models on AMD GPUs.


During AMD's Advancing AI event, Lamini co-founder Sharon Zhou highlighted the company's continued use of AMD hardware and software. The partnership first came to light two months earlier, when Lamini disclosed that it had been running exclusively on AMD GPUs for the past year.



Lamini, in collaboration with AMD's ROCm open software ecosystem, has unveiled the LLM Superstation, a finely tuned supercomputer with 128 AMD Instinct GPUs. It allows customers to train LLMs, including Meta AI's Llama 2, while keeping their AI models proprietary.


Lamini incorporates sophisticated optimisations such as PEFT (LoRA), RLHF, and Toolformer, enabling data isolation across 4,266 models on a single server, faster model switching, model compression, and seamless integration of LLMs with enterprise APIs.
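For context, the LoRA technique mentioned above sidesteps full-weight fine-tuning by training small low-rank adapter matrices alongside a frozen base model, which is what makes packing thousands of fine-tuned variants onto one server feasible. Here is a minimal NumPy sketch of the idea; the dimensions, rank, and scaling factor are illustrative assumptions, not Lamini's actual configuration:

```python
import numpy as np

# LoRA: instead of updating the full weight matrix W (d_out x d_in),
# train a low-rank update B @ A with rank r << min(d_out, d_in).
# Effective weight: W' = W + (alpha / r) * B @ A

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 64, 4, 8

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable adapter, rank r
B = np.zeros((d_out, r))                # trainable adapter, zero-initialised

def lora_forward(x):
    # Base path plus the scaled low-rank adapter path.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)

# With B initialised to zero, the adapted model starts out identical
# to the frozen base model.
assert np.allclose(lora_forward(x), W @ x)

# Parameter savings: only the adapters are trained per fine-tune.
full_params = d_out * d_in          # 4096
lora_params = r * (d_in + d_out)    # 512, i.e. 8x fewer here
print(f"trainable params: full={full_params}, lora={lora_params}")
```

Because each fine-tune only owns its tiny A and B matrices, many customers' adapters can share one copy of the frozen base weights, which is the storage and switching advantage the figures above rest on.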


The platform can host 200-billion-parameter models on a single server, handle 10,000 fine-tuned language models, manage 12,800 concurrent requests, and process over 3.5 million queries per day on a single node.


Lamini's solution also offers a cost advantage, claiming to be 10 times less expensive than AWS, without the lengthy wait time associated with NVIDIA H100s. AMD's Instinct MI250X, used as the foundation for Lamini, reportedly runs larger models than NVIDIA's A100s. 


Lamini has been deployed in AMD's internal Kubernetes cluster, emphasising the company's commitment to creating models trained on AMD's code base across various components for specific developer tasks.


Read the full story here.




Busting Benchmark Myths 


There is stiff competition among companies to excel on various benchmarks, especially MMLU (massive multitask language understanding), which is designed to measure knowledge by evaluating models exclusively in zero-shot and few-shot settings. However, this approach has been widely criticised as a marketing gimmick that showcases performance only on paper. The real evaluation of a model happens once it reaches people's hands and its real-world use cases are discovered.


That's where Hugging Face's Open LLM Leaderboard comes into the picture, giving a fuller view by listing different models alongside their compute costs. In an exclusive interview with AIM, Philipp Schmid, a technical lead at Hugging Face, busts many myths about LLM benchmarks and offers fresh insights.


Read the full story here.




Apple’s Health Game


Apple recently rolled out iOS 17.2, enabling spatial video recording for iPhone 15 Pro and iPhone 15 Pro Max users. The upcoming Apple Vision Pro, expected in early 2024, will bring these videos to life. Reports suggest Apple's 2024 focus will shift heavily to wearables, with only minor iPhone upgrades. The healthcare sector takes centre stage, featuring in the launch of advanced AirPods and watches.


The latter will include enhanced health detection capabilities, covering issues like hypertension and apnea. AirPods may incorporate brain wave-detecting sensors for measuring various physiological parameters, representing Apple's deep dive into health-related technology advancements.


Read the full story here.




A Celestial Showdown



In the competitive realm of satellite internet, Amazon's Project Kuiper aims to challenge SpaceX's Starlink. Jeff Bezos expects multiple winners in space, much as the internet produced many successful companies. Recently, Amazon achieved a breakthrough for Kuiper, testing optical mesh networks in low-Earth orbit and achieving 100 Gbps links between prototype satellites using infrared lasers.


Kuiper's optical inter-satellite links (OISL) enable simultaneous connections with multiple spacecraft, forming a mesh network that moves data 30% faster than terrestrial fibre optic cables. Despite progress, Kuiper trails behind Starlink, which boasts over 2 million customers and significant government contracts.


Read the full story here.

 

   

DOWNLOAD OUR MOBILE APP

Stay Connected

info@analyticsindiamag.com

© 2023 Analytics India Magazine

   
Facebook
Twitter
LinkedIn
Youtube
Instagram
   
 
Analytics India Magazine | 280, 2nd floor, 5th Main, 15 A cross, Sector 6, HSR layout Bengaluru, Karnataka 560102
