Nvidia Crushes GPT-4, Adobe Dives Into AI, and Mistral AI for Laptops & Phones
PLUS HOT AI Tools & Tutorials
Hello and welcome to our weekly roundup!
Yeah, I know this is an AI newsletter, but still, what do you think of SpaceX's latest achievement? I've never been a big fan of space tech, but that Mechazilla booster-catch video genuinely impressed me. It was cool.
Anyway, let's get to our topic. This week, we had a big Adobe event, a surprise new winner on the LLM leaderboards, and new features for Amazon’s AI Ads generator.
So here we go.
This Creators’ AI Edition:
Featured Materials 🎟️
News of the week 🌍
Useful tools ⚒️
Weekly Guides 📕
AI Meme of the Week 🤡
AI Tweet of the Week 🐦
(Bonus) Materials 🎁
Featured Material 🎟️
Adobe MAX 2024 Conference
Honestly, I don't even know where to start.
Adobe held its annual MAX 2024 conference this week, showcasing a long list of product updates. The event centered on AI features and making content creation easier. The company unveiled over 100 new tools for its Creative Cloud platform, many of which are powered by Adobe's Firefly AI.
One of the company's most popular products, Adobe Photoshop, seems to have gotten the most updates. So, let's summarize the important ones:
Distraction Removal: this feature allows you to automatically remove unwanted elements from images and quickly replace the background with generated content.
Generative Fill & Expand: these Firefly-based tools allow you to fill and expand images with AI. They also generate images with different levels of detail, lighting, composition, and color.
In addition to the above, there is a Generate Similar for creating image alternatives if you don't like the first one.
Generative Workspace: a particular brainstorming function that supports the simultaneous processing of multiple prompts. With its help, you can generate different sets of images in parallel.
Adobe is also working on experimental features for different kinds of creators. One example is Project Super Sonic, which uses different sounds, including the user's voice, to generate audio tracks.
Here's how the company unveiled this feature at the conference:
Another unusual tool (which is sure to find users across many industries) is called Project Turntable. It can turn 2D artwork into rotatable 3D objects in a couple of clicks.
Here's an example:
And the company doesn't plan to stop there. Adobe will continue to develop new models and enhance existing ones. It intends to help 30M students and teachers develop AI literacy, content creation, and digital marketing skills with Adobe Express.
AI features will also soon appear in other products besides Photoshop and Premiere Pro. More details about the platforms you use can be found on Adobe's website.
News Of The Week 🌍
Nvidia Dropped New AI Model That Crushes OpenAI’s GPT-4
Nvidia has unveiled a new AI model that outperforms the leading systems, including GPT-4o and Claude 3.5. The new model, dubbed Llama-3.1-Nemotron-70B-Instruct, now tops the Chatbot Arena leaderboard from lmarena.ai. It is essentially a fine-tuned version of Meta's open-source Llama-3.1-70B-Instruct; the “Nemotron” part of the name reflects Nvidia's contribution to the final result.
The company’s new offering posts top scores on key benchmarks: 85.0 on Arena Hard, 57.6 on AlpacaEval 2 LC, and 8.98 on MT-Bench (judged by GPT-4-Turbo), beating GPT-4o and Claude 3.5 Sonnet on the same evaluations.
To build the new model, Nvidia fine-tuned Llama 3.1 with Reinforcement Learning from Human Feedback (RLHF), letting the AI learn from human preferences and produce more natural, contextually relevant responses. Early users suggest the model could give businesses a more efficient and cost-effective alternative to the most advanced models on the market.
The company offers free hosted inference through its build.nvidia.com platform, complete with an OpenAI-compatible API.
You can also find more information on Hugging Face.
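If you want to poke at it yourself, the hosted endpoint speaks the standard OpenAI chat-completions protocol, so the regular openai Python client works with a swapped base URL. Here's a minimal sketch; the endpoint URL and model id below are based on how build.nvidia.com lists the model at the time of writing, so double-check them (and grab an API key) there first:

```python
# Minimal sketch: querying Nvidia's hosted Nemotron model through its
# OpenAI-compatible API. The endpoint URL and model id are assumptions based
# on the build.nvidia.com catalog -- verify them there before running.
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # Nvidia's OpenAI-compatible endpoint
    api_key="nvapi-...",  # your personal key from build.nvidia.com
)

response = client.chat.completions.create(
    model="nvidia/llama-3.1-nemotron-70b-instruct",
    messages=[{"role": "user", "content": "Summarize this week's AI news in one sentence."}],
    temperature=0.5,
    max_tokens=256,
)

print(response.choices[0].message.content)
```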
Nvidia had a very good week in general. The company's stock hit a record high on Monday, closing at $138.07. Though, to be fair, that has less to do with the new model and more with Wall Street awaiting earnings reports from Microsoft, Meta, Google, and Amazon and what they'll reveal about AI infrastructure spending.
Mistral Releases AI Models Optimized for Laptops and Phones
French startup Mistral has released its first generative AI models built to run on laptops and smartphones. The family, dubbed “Les Ministraux,” is suited to various applications, including text generation and collaborating on complex tasks. The developer currently offers two models: Ministral 3B and Ministral 8B. Both have a context window of 128,000 tokens (enough for roughly a book's worth of text).
Ministral 8B is already available for download, but only for research purposes. The startup requires developers and companies interested in self-deploying one of the models to contact it for a commercial license.
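If the research-only license works for you, a local test run is straightforward. Below is a rough sketch using the Hugging Face transformers library; the repository id is my assumption of where the instruct checkpoint lives, so verify it, the license terms, and the hardware requirements on the model card before downloading:

```python
# Rough sketch: running Ministral 8B locally for research purposes.
# The repo id below is an assumption -- confirm it (and the research license)
# on Hugging Face first. A GPU with roughly 20 GB of VRAM is assumed for
# bfloat16 inference.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Ministral-8B-Instruct-2410"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Draft a short product description for a travel mug."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=200, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```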
Amazon's AI Generator Tool Now Creates Audio Ads
Amazon Ads has expanded its suite of creative tools to include Audio Generator, a new feature that allows advertisers to create audio ads using generative AI. Advertisers can link Audio Generator to their product listing, and it will create a 30-second audio ad based on the product description. The ad can be customized with additional inputs and parameters if needed.
Amazon has chosen a rather interesting strategy for developing AI. Instead of entering new niches, the company uses different models to improve existing products. How promising this approach is, only time will tell.
Meta’s AI Chief Predicted When We'd Get Human-Level AI
Meta's chief AI scientist, Yann LeCun, offered a dose of realism. According to him, today's models (like ChatGPT and Meta AI) are still far from anything resembling genuine awareness or perception of the world around them. That doesn't mean we're not moving in the right direction, though. LeCun argues that to solve harder problems, we need AI systems that can perceive and reason about the three-dimensional world, built on a new type of architecture: world models.
Here's how he explained it:
A world model is your mental model of how the world behaves. You can imagine a sequence of actions you might take, and your world model will allow you to predict what the effect of the sequence of action will be on the world.
However, even with such an architecture, LeCun believes human-level AI is at least 10 years away. Interestingly, Sam Altman thinks otherwise and believes that real artificial intelligence could arrive closer to the end of this decade.
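For the technically curious, the idea LeCun describes can be reduced to a tiny interface: a learned function that maps the current state and a candidate action to a predicted next state, which an agent can roll out to “imagine” a whole plan before acting. The toy sketch below is purely illustrative, with made-up names, and has nothing to do with Meta's actual research code:

```python
# Toy illustration of the world-model idea: a predictor that maps
# (state, action) -> next state, which an agent rolls out to evaluate a
# sequence of actions before acting. Purely hypothetical, not Meta's code.
from typing import Callable, Sequence

State = dict   # e.g. positions of objects, as perceived by the agent
Action = str   # e.g. "push the cup left"

def rollout(world_model: Callable[[State, Action], State],
            state: State,
            plan: Sequence[Action]) -> list[State]:
    """Imagine the consequences of a plan without touching the real world."""
    imagined = []
    for action in plan:
        state = world_model(state, action)  # predict the next state
        imagined.append(state)
    return imagined

# Trivial stand-in world model; a real system would learn this from video and sensor data.
def dummy_world_model(state: State, action: Action) -> State:
    new_state = dict(state)
    new_state["last_action"] = action
    return new_state

print(rollout(dummy_world_model, {"cup": "on table"}, ["push cup left", "lift cup"]))
```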
Sam Altman Rebrands His Worldcoin Crypto Project
Speaking of the head of OpenAI: Worldcoin, the eyeball-scanning “proof of personhood” crypto project co-founded by Sam Altman, announced Thursday that it has dropped the word “coin” from its name and is now called “World.” The startup behind World, Tools for Humanity, also unveiled its next generation of iris-scanning “Orbs” and other tools at a live event in San Francisco.
For those who missed it all, World is based on the idea that advanced AI systems—like the one OpenAI is trying to build—will one day make it impossible to tell whether you're communicating with a real person. So, the company wants to use blockchain to solve that problem. It promises to make the benefits of AI available to everyone, potentially redistributing AI-created wealth to people through its Worldcoin tokens.
Here's the video from its latest event:
That said, it should be noted that World's future is rather murky. The Kenyan government previously investigated the startup, resulting in a temporary suspension of the project, and EU regulators continue to scrutinize how the company handles data.
Useful Tools ⚒️
Prisma Optimize – AI-driven query analysis
CodeAnt AI – AI code reviews that can cut code review time & bugs in half
Reiden AI – Your brilliant keyboard shortcut copilot
Prismy – GitHub-native, AI localization for dev & product teams
Convo – Qualitative user research powered by AI
Convo aims to streamline the entire process of conducting user research, making it faster and more efficient for product owners and researchers. It utilizes AI to conduct interviews and surveys, allowing users to gather in-depth insights at scale. With this tool, you can fully automate the audience research process and reduce the time typically spent collecting and analyzing data. I’d say you should give it a try.
Weekly Guides 📕
I Made a YouTube Shorts Automation Channel Using Only AI in 8 Minutes
Cursor Tutorial for Beginners (AI Code Editor)
How To Learn Technical Things Fast (with the help of AI)
Build Your Own Local DEEPFAKE Studio with FREE FaceFusion 3 AI
How to Use AI to Build Your Company’s Collective Intelligence
AI Meme Of The Week 🤡
AI Tweet Of The Week 🐦
Well, I don't even know how to feel about that.
(Bonus) Materials 🏆
Duolingo CEO Luis von Ahn wants you addicted to learning
Machines of Loving Grace - essay by Anthropic’s CEO
Filmmakers Are Worried About AI. Big Tech Wants Them to See ‘What's Possible’
National Archives Want to Use Google’s Gemini to Increase Productivity
Elon Musk wants to dominate robotaxis—first he needs to catch up to Waymo - an interview with Waymo's co-CEO
Share this edition with your friends!