California Passes Law Forcing Digital Game Stores to Admit You Don’t Own Digital Games
California has passed a new law, AB 2426, that aims to bring transparency to the digital ownership of games purchased through online stores. The law requires game retailers to inform consumers about the limitations of their purchases and to disclose that customers are merely licensing content instead of buying it outright. The law was inspired by actions taken by major gaming publishers like Ubisoft and Sony.
California Assemblymember Jacqui Irwin introduced AB 2426 after seeing cases where consumers lost access to content they had paid for digitally. Ubisoft pulled access to the online-only game The Crew without providing an offline option. Additionally, Sony initially decided to remove purchased Discovery video content from the PlayStation Store before reversing the decision. These events highlighted that digital purchases don’t guarantee permanent access and ownership.
Key Points of California AB 2426 Law
1. Transparency in Licensing
Stores cannot use terms like “buy” or “purchase” without disclosing that customers are receiving a license and do not own the content.
2. Disclaimers
Websites must provide clear notice about the restrictions of digital licenses upfront.
3. Offline Exception
The law won’t apply to games/content that offer permanent offline downloads.
4. Scope
All major storefronts on platforms like Xbox, PlayStation, Steam, and Uplay must comply with the new rules.
5. Enforcement
Fines can be imposed on retailers for not following the disclosure guidelines.
Positive Impact of California AB 2426 Law
While the law doesn’t bar revoking licenses, it aims to ensure customers are aware they are paying for access to digital movies, games, books, and other media, not ownership. This could influence purchasing decisions once buyers understand the uncertainty around ownership. The changes also set an example for other regions to bring transparency to digital licenses. Consumer advocates call it a step towards better protection around digital media licensing practices.
Concluding Thoughts
Only time will tell how effective the new guidelines are in practice. However, California’s AB 2426 is a step towards protecting consumers and closing the gap between digital and physical media ownership models. Increased awareness may also lead to more pro-consumer policies from platforms in the future.
Cohere AI Drops Command A, The AI That’s Smarter, Faster and More Affordable
In today’s fast-moving world of AI, a powerful AI system typically needs a lot of expensive hardware to run properly, and companies spend a fortune just keeping these systems online. So they are always looking for technology that works well without breaking the bank – AI that can do impressive things with minimal computing needs. That balance is tricky to get right. But what if there were an AI model that is just as smart and fast but needs far less computing power? That’s exactly what Cohere AI has accomplished with its newest model, Command A.
Command A is the newest and most capable AI model from Cohere AI. It is smarter, faster, and more secure than earlier versions like Command R and Command R+. What makes it special is that it performs on par with or better than well-known models like GPT-4o and DeepSeek-V3 while needing far less computing power. That gives businesses powerful AI without huge electricity bills or expensive hardware.
Key Features of Command A for Enterprises
This model is designed with businesses in mind. It has several features that make it perfect for companies:
1. Command A’s Chat Capabilities
Out of the box, Command A works as a conversational AI with interactive behavior. This setup is perfect for chatbots and other dialogue applications. The model takes text inputs and creates text outputs using an optimized architecture. It has two safety modes: contextual mode allows wider-ranging interactions while maintaining core protections, and strict mode avoids all sensitive topics.
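For developers, a minimal sketch of what a chat call might look like using Cohere’s Python SDK is shown below; the model identifier and the safety_mode values are assumptions based on Cohere’s public API and may need adjusting.

```python
# Minimal sketch: chatting with Command A via Cohere's Python SDK (pip install cohere).
# The model ID "command-a-03-2025" and the safety_mode values are assumptions
# taken from Cohere's public documentation and may differ for your account.
import cohere

co = cohere.ClientV2(api_key="YOUR_API_KEY")

response = co.chat(
    model="command-a-03-2025",
    messages=[{"role": "user", "content": "Draft a polite reply to a customer asking about a late order."}],
    safety_mode="CONTEXTUAL",  # use "STRICT" to avoid all sensitive topics
)
print(response.message.content[0].text)
```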
2. 256k Context Window
Under the hood, it has some impressive specs. It has 111 billion parameters and can handle really long texts – up to 256,000 tokens at once. Most competing AIs can only handle half that amount.
3. Advanced RAG Capabilities
Command A comes with “retrieval-augmented generation” (RAG). It can look up information in supplied documents and include references for its answers. Human evaluators rated it above GPT-4o at this task: its answers were smoother, more accurate, and more useful.
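A rough sketch of how RAG might look with the same SDK is below; the documents payload shape and the citations attribute are assumptions based on Cohere’s v2 chat API.

```python
# Sketch: grounding Command A's answer in your own documents (RAG).
# The documents payload shape and response.message.citations are assumptions
# based on Cohere's v2 chat API; verify against the current SDK docs.
import cohere

co = cohere.ClientV2(api_key="YOUR_API_KEY")

docs = [
    {"data": {"title": "Return policy", "text": "Items may be returned within 30 days of delivery."}},
    {"data": {"title": "Shipping", "text": "Standard shipping takes 3-5 business days."}},
]

response = co.chat(
    model="command-a-03-2025",
    messages=[{"role": "user", "content": "How long do customers have to return an item?"}],
    documents=docs,
)
print(response.message.content[0].text)   # grounded answer
print(response.message.citations)         # references pointing back to the documents
```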
4. Multilingual Excellence
Global companies need AI that works in many languages. Command A supports 23 languages spoken by most of the world’s population. It consistently answers in any of the 23 languages you ask for. In tests, people preferred it over DeepSeek-V3 across most languages for business tasks.
5. Enhanced Code Generation Capabilities
Command A is much better at coding tasks than previous models, outperforming similar-sized models on business-relevant tasks like SQL generation and code translation. Users can ask for code snippets, explanations, or rewrites and get better results by using certain settings for code-related requests.
6. Enterprise-Grade Security
Command A has strong security features to protect sensitive business information. It can also connect with other business tools and apps, making it a versatile addition to existing systems.
7. Agentic Tool Use
The real magic happens when Command A powers AI agents within a company. It works seamlessly with North, Cohere’s platform for secure AI agents. This lets businesses build custom AI helpers that can work inside their secure systems, connecting to customer databases, inventory systems, and search tools.
How Well Command A Performs
When tested side-by-side with the biggest names in AI, like GPT-4o and DeepSeek-V3, Command A holds its own and often comes out on top. It performed better on business tasks, science problems, and computer coding challenges.
The model matches or beats the bigger and slower AI models while working much more efficiently.
Command A generates up to 156 tokens per second – 1.75 times faster than GPT-4o and 2.4 times faster than DeepSeek-V3.
It only needs two GPUs to run, while other AIs might need up to 32!
Moreover, this tool does great on standard tests for following instructions, working with other tools, and acting as a helpful assistant.
How to Get Started With Command A
Command A is available right now through several channels. You can chat with it in Cohere AI’s playground here, or try it out through the Hugging Face Space demo here. Soon, it will be available through major cloud providers. Companies that want to install it on their own servers can contact Cohere’s sales team.
Command A Pricing Structure
Cohere AI has set competitive prices for using Command A:
Input tokens: $2.50 per million
Output tokens: $10.00 per million
This pricing lets businesses predict costs based on how much they’ll use the system, making budget planning easier.
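To see how those per-token rates translate into a budget, here is a small illustrative calculation; the workload numbers are made up for the example.

```python
# Quick cost estimate at Command A's published rates:
# $2.50 per million input tokens and $10.00 per million output tokens.
INPUT_RATE = 2.50 / 1_000_000    # dollars per input token
OUTPUT_RATE = 10.00 / 1_000_000  # dollars per output token

def monthly_cost(input_tokens: int, output_tokens: int) -> float:
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Hypothetical workload: 50M input tokens and 10M output tokens per month.
print(f"${monthly_cost(50_000_000, 10_000_000):,.2f}")  # $225.00
```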
The Command A Advantage
Cohere AI worked hard to make Command A efficient – powerful but not power-hungry. The result is an AI that answers much faster than its competitors. Businesses that deploy Command A on their own hardware instead of using it over the API can also save up to 50% compared to paying per use. What does this mean in practice? Businesses using Command A can:
Get answers for customers more quickly
Spend less money on fancy computers
Grow their AI use without huge cost increases
Save money overall
Wrapping Up
As more businesses bring AI into their daily operations, tools like Command A will become more important. In a crowded AI market, its ability to deliver great results with minimal resources addresses one of the biggest challenges in business AI adoption.
By putting efficiency first without sacrificing performance, Cohere AI has created a solution that fits perfectly with what modern businesses actually need. For sure, this practical tool can help businesses stay competitive in our AI-powered world.
Google Launches Gemma 3, A Powerful Yet Lightweight Family of AI Models
Google has just launched the latest addition to the Gemma family of generative AI models, Gemma 3. It is a collection of lightweight, super-smart AI models based on Gemini 2.0. With a remarkable 100 million downloads within its first year and an impressive community that has crafted over 60,000 variants, Gemma has established itself as a cornerstone in the realm of AI development. Gemma 3 is specially designed to run directly on your devices, including phones, laptops, and desktop computers. This means you don’t need expensive cloud servers to use powerful AI models.
These models come in four sizes (1B, 4B, 12B, and 27B) and five precision levels, from full 32-bit down to 4-bit. Bigger models with higher precision generally work better but need more computing power and memory. Smaller models with lower precision use fewer resources but might not be quite as capable. You can pick the one that works best for your device and what you want to do.
The memory needed varies a lot depending on which model you choose. The smallest version (Gemma 3 1B in 4-bit precision) needs only about 861 MB of memory – less than a typical smartphone has! The largest version (Gemma 3 27B in full 32-bit precision) needs about 108 GB – that’s like needing a high-end server.
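Those figures follow roughly from parameter count times bytes per parameter; here is a back-of-the-envelope sketch (real footprints, like the 861 MB quoted for the 4-bit 1B model, are higher once overhead is included):

```python
# Rough rule of thumb: weight memory ≈ parameters × bytes per parameter.
# Actual memory use is higher because of embeddings kept at higher precision,
# activations, and runtime overhead.
def approx_weight_memory_gb(params_billion: float, bits_per_param: int) -> float:
    return params_billion * 1e9 * (bits_per_param / 8) / 1e9  # decimal GB

print(approx_weight_memory_gb(27, 32))  # ~108 GB for the full-precision 27B model
print(approx_weight_memory_gb(1, 4))    # ~0.5 GB of raw weights for the 4-bit 1B model
```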
Key Features of Gemma 3
1. High Performance on a Single GPU
The Gemma 3 models outperform much bigger models like Llama-405B, DeepSeek-V3, and o3-mini while running on just one GPU or TPU, making capable AI cheaper and more accessible for everyone.
2. Multimodal Capabilities
The models (except the smallest 1B size) can understand both pictures and text. This lets apps do cool things like recognize objects in photos, read text from images, and answer questions about pictures.
3. Expanded Context Window
With a 128k-token context window, Gemma 3 can remember and understand lots of information at once. This is 16 times bigger than older Gemma models! You could feed it several multi-page articles, larger single documents, or hundreds of images in a single prompt.
4. Multilingual Support
The models can speak over 35 languages right out of the box and have been trained on more than 140 languages in total. This lets developers build apps that talk to users in their own language, opening those apps up to many more people.
5. Function Calling Support
Gemma 3 supports “function calling,” which means it can trigger other programs to do things. This facilitates the automation of complex tasks, enhancing the overall functionality and utility of applications built with it.
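To make the idea concrete, here is a sketch of the application side of function calling; the JSON shape is made up for illustration, since the exact format Gemma 3 emits depends on how your runtime’s chat template defines tools.

```python
# Illustration of the app side of function calling: the model is prompted to
# reply with a JSON "call", and the application parses it and runs the matching
# Python function. The JSON shape here is a made-up example, not Gemma 3's
# official format.
import json

def get_weather(city: str) -> str:
    return f"Sunny and 22°C in {city}"  # stand-in for a real weather API call

TOOLS = {"get_weather": get_weather}

# Pretend this string came back from the model for "What's the weather in Lahore?"
model_output = '{"name": "get_weather", "arguments": {"city": "Lahore"}}'

call = json.loads(model_output)
result = TOOLS[call["name"]](**call["arguments"])
print(result)  # would be fed back to the model as the tool result in the next turn
```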
6. Quantization Support
The models come in “quantized” versions that use less memory and computing power while still being accurate. These versions range from full 32-bit precision down to tiny 4-bit versions, so developers can choose what works best for their needs.
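As a hedged example, this is roughly how a 4-bit checkpoint could be loaded with Hugging Face Transformers and bitsandbytes; the model ID is an assumption, and the Gemma models require accepting the license on Hugging Face first.

```python
# Sketch: loading a 4-bit quantized Gemma 3 checkpoint with Transformers + bitsandbytes.
# Assumes: pip install transformers accelerate bitsandbytes, a recent Transformers
# release with Gemma 3 support, and access granted to the gated Gemma repo.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "google/gemma-3-1b-it"  # assumed ID for the text-only 1B instruction-tuned variant
quant_config = BitsAndBytesConfig(load_in_4bit=True)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=quant_config, device_map="auto"
)

inputs = tokenizer("Explain quantization in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```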
7. Easy Integration with Existing Tools
It plays nicely with lots of popular development tools like Hugging Face Transformers, Ollama, JAX, Keras, PyTorch, Google AI Edge, UnSloth, vLLM, and Gemma.cpp.
8. Easy to Customize
It comes with recipes for fine-tuning and running it efficiently. Developers can train and adapt the model using platforms like Google Colab, Vertex AI, or even a gaming GPU.
9. Works Great on NVIDIA GPUs
NVIDIA has specially optimized these models to work well on all their GPUs, from the small Jetson Nano to their newest Blackwell chips.
How Gemma 3 Compares to Other AI Models
This family has scored impressively on AI benchmarks. The 27B version scored 1338 on the Chatbot Arena Elo leaderboard, putting it in the same league as much bigger models. What’s really amazing is that while some competing models need up to 32 huge NVIDIA H100 GPUs (which cost thousands of dollars each), the 27B variant needs just one GPU. That’s like getting sports car performance for the price of a compact car!
Real-World Uses for Gemma 3
1. Smart Apps on Your Phone
Gemma 3’s efficiency makes it perfect for creating smart apps that run directly on your phone. Developers can build AI assistants, language translators, content creators, and image analyzers that work quickly without needing to connect to the cloud all the time.
2. Edge Computing
For Internet of Things (IoT) devices and edge computing, it lets AI processing happen right where the data is collected. This reduces the need to send data back and forth, which saves bandwidth and keeps private data local.
3. AI for Small Businesses
Gemma 3 makes advanced AI available to organizations with limited resources. Small and medium businesses can now use sophisticated AI without spending a fortune on cloud computing. They can run its applications on the computers they already have.
4. Educational Tools
Schools and universities can use it to help students learn about AI. Students can experiment with cutting-edge AI on regular school computers, and researchers can innovate without needing super expensive systems.
Getting Started With Gemma 3
Developers can try them instantly in their web browser using Google AI Studio. No complicated setup needed! They can also get an API key from Google AI Studio to use it with Google’s GenAI SDK.
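For the API route, a minimal sketch using Google’s GenAI SDK might look like this; the model name is an assumption, so check the model list in Google AI Studio for the exact identifier.

```python
# Sketch: calling Gemma 3 through the Gemini API with Google's GenAI SDK
# (pip install google-genai). The model name below is an assumption.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemma-3-27b-it",
    contents="Give me three ideas for an on-device AI app.",
)
print(response.text)
```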
For those who want to adapt it to their specific needs, the models are available for download from Hugging Face, Ollama, or Kaggle. You can easily fine-tune and adapt the model using Hugging Face’s Transformers library or other tools you prefer.
Alibaba Introduces VACE, The Ultimate AI Model That Takes Video Editing to the Next Level
Alibaba is on fire when it comes to AI. The company keeps dropping one AI model after another, including image generators, video generators, chatbots, and much more. Now, they have introduced VACE, a super cool all-in-one AI model for creating and editing videos. Whether you want to generate new videos, edit existing ones, or manipulate specific parts of a clip, VACE has got you covered. Most AI video tools focus on just one or two tasks, maybe simple editing, image generation, basic animation, or color adjustments. But Alibaba’s VACE does it all in one place.
Key Features of Alibaba VACE for Video Creation and Editing
VACE comes packed with amazing features that change how we make and edit videos. It handles tasks like reference-to-video generation (R2V), video-to-video editing (V2V), and masked video-to-video editing (MV2V). Moreover, it offers cool features like Move-Anything, Swap-Anything, Reference-Anything, Expand-Anything, and Animate-Anything.
1. Text-to-Video Generation (T2V)
VACE includes an amazing Text-to-Video Generation (T2V) feature, which is one of the most basic yet powerful video creation capabilities. You just provide a text prompt, and the video is generated accordingly.
2. Reference-to-Video Generation Feature
VACE’s Reference-to-Video (R2V) feature lets users generate new videos based on reference images. If you have a certain style or aesthetic in mind, VACE can analyze that and create videos that match it.
3. Video-to-Video Editing Feature
This feature lets users make changes to existing videos. It can help you apply a new visual style, change elements in a scene, or tweak colors. The best part? It does all of this while keeping edits smooth and natural, with no weird jumps or inconsistencies.
4. Masked Video-to-Video Editing Feature
This feature lets you edit specific parts of a video. You can define a specific area in the video and make changes to just that part, leaving the rest untouched. This makes it perfect for everything from fixing mistakes to adding new creative elements.
5. Move-Anything Feature
This feature lets users grab objects in a video and move them around while keeping everything looking smooth and natural. Just select, move, and watch the AI do the heavy lifting. It even understands perspective and occlusions, so objects blend right into their new spots without looking out of place.
6. Swap-Anything Feature
This feature swaps anything out of a video without it looking fake. Whether it’s changing a person’s outfit, replacing a background, or switching out objects, the AI ensures the new elements match the original’s motion, lighting, and surroundings. This is a game-changer, especially for virtual try-ons.
7. Reference-Anything Feature
This feature takes style transfer to the next level. Instead of just applying a filter, VACE lets users bring in colors, textures, and even composition elements from one video or image and apply them to another.
8. Expand-Anything Feature
This feature helps you adjust a video’s aspect ratio without awkward cropping or stretching. It extends the frame, generating new visuals that match the existing scene. Whether you’re repurposing a landscape video for a vertical format or adjusting a shot to fit different screens, this feature makes sure everything looks natural and cohesive.
9. Animate-Anything Feature
This feature turns still images into moving visuals. With Animate-Anything, VACE analyzes a static image, figures out what could move naturally, and creates realistic motion sequences. You can add subtle movement or full-blown animations. This is perfect for breathing life into any photo.
Performance Evaluation of VACE
What makes VACE stand out? Most AI models focus on just one or two specific tasks, while VACE is built to unify multiple video-editing functions within a single framework. To test its performance, researchers developed the VACE-Benchmark, a framework designed to evaluate video generation quality across multiple factors.
Compared to task-specific models like I2VGenXL, CogVideoX-I2V, ProPainter, and Control-A-Video, VACE has demonstrated competitive or even superior results in human and automated evaluations. The model showed impressive performance across aesthetic quality, background consistency, dynamic degree, imaging quality, motion smoothness, overall consistency, subject consistency, and temporal flickering, marking it as the best all-in-one tool.
Potential Applications of VACE
VACE has the potential to shake up multiple creative fields. Here’s how it could be used:
1. Film and Video Production
It can help streamline post-production workflows by enabling seamless editing and video generation.
2. Advertising
The Alibaba VACE can create high-quality video ads with specific reference materials and controlled stylistic elements.
3. Gaming and Animation
It can generate animated sequences or game cinematics based on reference imagery or existing footage.
4. Social Media Content
This video model can help creators quickly produce and edit high-quality videos for various platforms.
5. Virtual Reality
It can expand the possibilities for creating immersive visual experiences.
By combining multiple video editing and generation tools into one model, VACE could become a go-to solution for industries that need speed, quality, and creative flexibility.
Accessibility and Availability
While VACE has been introduced, it’s not publicly available yet. But, the model and code are expected to be released soon, along with support for ComfyUI workflow, VACE-Benchmark, Wan-VACE Model Inference, and LTX-VACE Model Inference. If the early tests are any indication, this could be one of the biggest leaps in AI-driven video editing yet. Stay tuned for updates!
For more technical details, you can check the model paper.