-
Portland, Oregon Business License: How-to Guide by Latest News & Stories From Around The World | Uber Blog on June 2, 2023
The Portland Business License is one of the driver requirements in Portland, Oregon. This guide walks you through the free application.
-
Gaming bacterial metabolism by Machine learning : nature.com subject feeds on June 2, 2023
-
Implement a multi-object tracking solution on a custom dataset with Amazon SageMaker by Gordon Wang (AWS Machine Learning Blog) on June 1, 2023
The demand for multi-object tracking (MOT) in video analysis has increased significantly in many industries, such as live sports, manufacturing, and traffic monitoring. For example, in live sports, MOT can track soccer players in real time to analyze physical performance such as real-time speed and moving distance. Since its introduction in 2021, ByteTrack remains to
-
A New Age: ‘Age of Empires’ Series Joins GeForce NOW, Part of 20 Games Coming in June by GeForce NOW Community (NVIDIA Blog) on June 1, 2023
The season of hot sun and longer days is here, so stay inside this summer with 20 games joining GeForce NOW in June. Or stream across devices by the pool, from grandma’s house or in the car — whichever way, GeForce NOW has you covered. Titles from the Age of Empires series are the next
-
Digital Renaissance: NVIDIA Neuralangelo Research Reconstructs 3D Scenes by Isha Salian (NVIDIA Blog) on June 1, 2023
Neuralangelo, a new AI model by NVIDIA Research for 3D reconstruction using neural networks, turns 2D video clips into detailed 3D structures — generating lifelike virtual replicas of buildings, sculptures and other real-world objects. Like Michelangelo sculpting stunning, life-like visions from blocks of marble, Neuralangelo generates 3D structures with intricate details and textures. Creative professionals
-
OpenAI cybersecurity grant program by OpenAI Blog on June 1, 2023
Our goal is to facilitate the development of AI-powered cybersecurity capabilities for defenders through grants and other support.
-
Structure-inducing pre-training by Machine learning : nature.com subject feeds on June 1, 2023
-
Translate documents in real time with Amazon Translate by Sathya Balakrishnan (AWS Machine Learning Blog) on May 31, 2023
A critical component of business success is the ability to connect with customers. Businesses today want to connect with their customers by offering their content across multiple languages in real time. For most customers, the content creation process is disconnected from the localization effort of translating content into multiple target languages. These disconnected processes delay
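For context, a minimal sketch of calling Amazon Translate from Python with boto3 is shown below. The region, text, and language codes are placeholder assumptions, and this shows only the basic real-time text API rather than the full document-translation workflow described in the post.

```python
import boto3

# Hypothetical example: translate a short piece of content on the fly.
translate = boto3.client("translate", region_name="us-east-1")

response = translate.translate_text(
    Text="Welcome to our product documentation.",
    SourceLanguageCode="en",      # or "auto" to let the service detect the source language
    TargetLanguageCode="es",
)

print(response["TranslatedText"])
```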
-
Scale your machine learning workloads on Amazon ECS powered by AWS Trainium instances by Guilherme Ricci (AWS Machine Learning Blog) on May 31, 2023
Running machine learning (ML) workloads with containers is becoming a common practice. Containers can fully encapsulate not just your training code, but the entire dependency stack down to the hardware libraries and drivers. What you get is an ML development environment that is consistent and portable. With containers, scaling on a cluster becomes much easier.
-
Host ML models on Amazon SageMaker using Triton: CV model with PyTorch backend by Neelam Koshiya (AWS Machine Learning Blog) on May 31, 2023
PyTorch is a machine learning (ML) framework based on the Torch library, used for applications such as computer vision and natural language processing. One of the primary reasons that customers are choosing a PyTorch framework is its simplicity and the fact that it’s designed and assembled to work with Python. PyTorch supports dynamic computational graphs,
-
Configure and use defaults for Amazon SageMaker resources with the SageMaker Python SDK by Giuseppe Angelo Porcelli (AWS Machine Learning Blog) on May 31, 2023
The Amazon SageMaker Python SDK is an open-source library for training and deploying machine learning (ML) models on Amazon SageMaker. Enterprise customers in tightly controlled industries such as healthcare and finance set up security guardrails to ensure their data is encrypted and traffic doesn’t traverse the internet. To ensure the SageMaker training and deployment of
-
Accelerate your learning towards AWS Certification exams with automated quiz generation using Amazon SageMaker foundations models by Eitan Sela (AWS Machine Learning Blog) on May 31, 2023
Getting AWS Certified can help you propel your career, whether you’re looking to find a new role, showcase your skills to take on a new project, or become your team’s go-to expert. And because AWS Certification exams are created by experts in the relevant role or technical area, preparing for one of these exams helps
-
Improving mathematical reasoning with process supervision by OpenAI Blog on May 31, 2023
We've trained a model to achieve a new state-of-the-art in mathematical problem solving by rewarding each correct step of reasoning (“process supervision”) instead of simply rewarding the correct final answer (“outcome supervision”). In addition to boosting performance relative to outcome supervision, process supervision also has an important alignment benefit: it directly trains the model to produce a chain-of-thought that is endorsed by humans.
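The distinction between the two reward schemes can be made concrete with a small illustrative sketch. This is not OpenAI's implementation, just a toy contrast between a single outcome reward and per-step process rewards; the functions and labels are hypothetical.

```python
# Illustrative sketch only -- contrasts outcome vs. process supervision for a solution
# that has been split into reasoning steps with human correctness labels per step.

def outcome_reward(final_answer, correct_answer):
    """Outcome supervision: one reward based only on the final answer."""
    return 1.0 if final_answer == correct_answer else 0.0

def process_reward(step_labels):
    """Process supervision: one reward per reasoning step, from human step labels."""
    return [1.0 if label == "correct" else 0.0 for label in step_labels]

steps = ["Let x be the number of apples.", "Then 2x + 3 = 11, so x = 4."]
print(outcome_reward(final_answer="4", correct_answer="4"))      # 1.0
print(process_reward(step_labels=["correct", "correct"]))        # [1.0, 1.0]
```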
-
Amazon SageMaker XGBoost now offers fully distributed GPU training by Dhiraj Thakur (AWS Machine Learning Blog) on May 30, 2023
Amazon SageMaker provides a suite of built-in algorithms, pre-trained models, and pre-built solution templates to help data scientists and machine learning (ML) practitioners get started on training and deploying ML models quickly. You can use these algorithms and models for both supervised and unsupervised learning. They can process various types of input data, including tabular,
-
Analyze Amazon SageMaker spend and determine cost optimization opportunities based on usage, Part 5: Hosting by Deepali Rajale (AWS Machine Learning Blog) on May 30, 2023
In 2021, we launched AWS Support Proactive Services as part of the AWS Enterprise Support plan. Since its introduction, we have helped hundreds of customers optimize their workloads, set guardrails, and improve visibility of their machine learning (ML) workloads’ cost and usage. In this series of posts, we share lessons learned about optimizing costs in
-
Analyze Amazon SageMaker spend and determine cost optimization opportunities based on usage, Part 4: Training jobs by Deepali Rajale (AWS Machine Learning Blog) on May 30, 2023
In 2021, we launched AWS Support Proactive Services as part of the AWS Enterprise Support plan. Since its introduction, we’ve helped hundreds of customers optimize their workloads, set guardrails, and improve the visibility of their machine learning (ML) workloads’ cost and usage. In this series of posts, we share lessons learned about optimizing costs in
-
Analyze Amazon SageMaker spend and determine cost optimization opportunities based on usage, Part 3: Processing and Data Wrangler jobs by Deepali Rajale (AWS Machine Learning Blog) on May 30, 2023
In 2021, we launched AWS Support Proactive Services as part of the AWS Enterprise Support plan. Since its introduction, we’ve helped hundreds of customers optimize their workloads, set guardrails, and improve the visibility of their machine learning (ML) workloads’ cost and usage. In this series of posts, we share lessons learned about optimizing costs in
-
Analyze Amazon SageMaker spend and determine cost optimization opportunities based on usage, Part 2: SageMaker notebooks and Studio by Deepali Rajale (AWS Machine Learning Blog) on May 30, 2023
In 2021, we launched AWS Support Proactive Services as part of the AWS Enterprise Support offering. Since its introduction, we have helped hundreds of customers optimize their workloads, set guardrails, and improve the visibility of their machine learning (ML) workloads’ cost and usage. In this series of posts, we share lessons learned about optimizing costs
-
Analyze Amazon SageMaker spend and determine cost optimization opportunities based on usage, Part 1 by Deepali Rajale (AWS Machine Learning Blog) on May 30, 2023
Cost optimization is one of the pillars of the AWS Well-Architected Framework, and it’s a continual process of refinement and improvement over the span of a workload’s lifecycle. It enables building and operating cost-aware systems that minimize costs, maximize return on investment, and achieve business outcomes. Amazon SageMaker is a fully managed machine learning (ML)
-
High-quality human feedback for your generative AI applications from Amazon SageMaker Ground Truth Plus by Jesse Manders (AWS Machine Learning Blog) on May 30, 2023
Amazon SageMaker Ground Truth Plus helps you prepare high-quality training datasets by removing the undifferentiated heavy lifting associated with building data labeling applications and managing the labeling workforce. All you do is share data along with labeling requirements, and Ground Truth Plus sets up and manages your data labeling workflow based on these requirements. From
-
3D telemedicine brings better care to underserved and rural communities, even across continents by Alyssa Hughes (Microsoft Research) on May 30, 2023
Providing healthcare in remote or rural areas is challenging, particularly specialized medicine and surgical procedures. Patients may need to travel long distances just to get to medical facilities and to communicate with caregivers. They may not arrive in time to receive essential information before their medical appointments and may have to return home before they can receive crucial follow-up care at the hospital. Some patients may wait several days just to meet with […]
-
NVIDIA RTX Transforming 14-Inch Laptops, Plus Simultaneous Screen Encoding and May Studio Driver Available Today by Gerardo Delgado (NVIDIA Blog) on May 30, 2023
New 14-inch NVIDIA Studio laptops, equipped with GeForce RTX 40 Series Laptop GPUs, give creators peak portability with a significant increase in performance over the last generation.
-
MediaTek Partners With NVIDIA to Transform Automobiles With AI and Accelerated Computing by Danny Shapiro (NVIDIA Blog) on May 29, 2023
MediaTek, a leading innovator in connectivity and multimedia, is teaming with NVIDIA to bring drivers and passengers new experiences inside the car. The partnership was announced this week at a COMPUTEX press conference with MediaTek CEO Rick Tsai and NVIDIA founder and CEO Jensen Huang. “NVIDIA is a world-renowned pioneer and industry leader in AI
-
Live From Taipei: NVIDIA CEO Unveils Gen AI Platforms for Every Industry by Rick Merritt (NVIDIA Blog) on May 29, 2023
In his first live keynote since the pandemic, NVIDIA founder and CEO Jensen Huang today kicked off the COMPUTEX conference in Taipei, announcing platforms that companies can use to ride a historic wave of generative AI that’s transforming industries from advertising to manufacturing to telecom. “We’re back,” Huang roared as he took the stage after
-
NVIDIA Brings Advanced Autonomy to Mobile Robots With Isaac AMR by Shri Sundaram (NVIDIA Blog) on May 29, 2023
As mobile robot shipments surge to meet the growing demands of industries seeking operational efficiencies, NVIDIA is launching a new platform to enable the next generation of autonomous mobile robot (AMR) fleets. Isaac AMR brings advanced mapping, autonomy and simulation to mobile robots and will soon be available for early customers, NVIDIA founder and CEO
-
Techman Robot Selects NVIDIA Isaac Sim to Optimize Automated Optical Inspection by Gerard Andrews (NVIDIA Blog) on May 29, 2023
How do you help robots build better robots? By simulating even more robots. NVIDIA founder and CEO Jensen Huang today showcased how leading electronics manufacturer Quanta is using AI-enabled robots to inspect the quality of its products. In his keynote speech at this week’s COMPUTEX trade show in Taipei, Huang presented on how electronics manufacturers
-
Electronics Giants Tap Into Industrial Automation With NVIDIA Metropolis for Factories by Adam Scraba (NVIDIA Blog) on May 29, 2023
The $46 trillion global electronics manufacturing industry spans more than 10 million factories worldwide, where much is at stake in producing defect-free products. To drive product excellence, leading electronics manufacturers are adopting NVIDIA Metropolis for Factories. More than 50 manufacturing giants and industrial automation providers — including Foxconn Industrial Internet, Pegatron, Quanta, Siemens and Wistron
-
NVIDIA Brings New Generative AI Capabilities, Groundbreaking Performance to 100 Million Windows RTX PCs and Workstations by Jason Paul (NVIDIA Blog) on May 29, 2023
Generative AI is rapidly ushering in a new era of computing for productivity, content creation, gaming and more. Generative AI models and applications — like NVIDIA NeMo and DLSS 3 Frame Generation, Meta LLaMa, ChatGPT, Adobe Firefly and Stable Diffusion — use neural networks to identify patterns and structures within existing data to generate new
-
Machine learning in rare disease by Machine learning : nature.com subject feeds on May 29, 2023
-
NVIDIA CEO Tells NTU Grads to Run, Not Walk — But Be Prepared to Stumble by Melody Tu (NVIDIA Blog) on May 27, 2023
“You are running for food, or you are running from becoming food. And often times, you can’t tell which. Either way, run.” NVIDIA founder and CEO Jensen Huang today urged graduates of National Taiwan University to run hard to seize the unprecedented opportunities that AI will present, but embrace the inevitable failures along the way.
-
Seattle Paid Sick and Safe Time Policy and Notice of Rights by Latest News & Stories From Around The World | Uber Blog on May 27, 2023
-
Create high-quality images with Stable Diffusion models and deploy them cost-efficiently with Amazon SageMaker by Simon Zamarin (AWS Machine Learning Blog) on May 26, 2023
Text-to-image generation is a task in which a machine learning (ML) model generates an image from a textual description. The goal is to generate an image that closely matches the description, capturing the details and nuances of the text. This task is challenging because it requires the model to understand the semantics and syntax of
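As a point of reference, a minimal local text-to-image sketch using the open-source Hugging Face diffusers library is shown below. The model ID and prompt are assumptions, and the post's SageMaker packaging and deployment steps are not reproduced here.

```python
# Minimal text-to-image sketch with the Hugging Face diffusers library.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # assumed publicly available Stable Diffusion checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Generate one image from a textual description and save it to disk.
image = pipe("a watercolor painting of a lighthouse at sunrise").images[0]
image.save("lighthouse.png")
```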
-
How to improve paratransit service delivery with TNCs: answers to your questions by Latest News & Stories From Around The World | Uber Blog on May 26, 2023
Top 5 questions commonly asked by mobility managers interested in improving their paratransit service delivery with TNCs.
-
Build a powerful question answering bot with Amazon SageMaker, Amazon OpenSearch Service, Streamlit, and LangChain by Amit Arora (AWS Machine Learning Blog) on May 25, 2023
One of the most common applications of generative AI and large language models (LLMs) in an enterprise environment is answering questions based on the enterprise’s knowledge corpus. Amazon Lex provides the framework for building AI-based chatbots. Pre-trained foundation models (FMs) perform well at natural language understanding (NLU) tasks such as summarization, text generation and question
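The retrieval-augmented question answering pattern behind this kind of bot can be sketched in a few lines with the classic pre-1.0 LangChain API. The sketch below swaps in a local FAISS index and OpenAI models for brevity; the post itself uses SageMaker-hosted models and an Amazon OpenSearch Service index, and the file name and question are placeholders.

```python
# Hedged sketch of retrieval-augmented QA with LangChain (pre-1.0 interface).
from langchain.chains import RetrievalQA
from langchain.document_loaders import TextLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS

docs = TextLoader("enterprise_handbook.txt").load()      # hypothetical knowledge corpus
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

index = FAISS.from_documents(chunks, OpenAIEmbeddings())  # embed and index the chunks

qa = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0),
    chain_type="stuff",                                    # stuff retrieved chunks into the prompt
    retriever=index.as_retriever(search_kwargs={"k": 3}),
)

print(qa.run("What is our parental leave policy?"))
```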
-
Get insights on your user’s search behavior from Amazon Kendra using an ML-powered serverless stack by Genta Watanabe (AWS Machine Learning Blog) on May 25, 2023
Amazon Kendra is a highly accurate and intelligent search service that enables users to search unstructured and structured data using natural language processing (NLP) and advanced search algorithms. With Amazon Kendra, you can find relevant answers to your questions quickly, without sifting through documents. However, just enabling end-users to get the answers to their queries
-
How OCX Cognition reduced ML model development time from weeks to days and model update time from days to real time using AWS Step Functions and Amazon SageMaker by Brian Curry (AWS Machine Learning Blog) on May 25, 2023
This post was co-authored by Brian Curry (Founder and Head of Products at OCX Cognition) and Sandhya MN (Data Science Lead at InfoGain). OCX Cognition is a San Francisco Bay Area-based startup, offering a commercial B2B software as a service (SaaS) product called Spectrum AI. Spectrum AI is a predictive (generative) CX analytics platform for
-
Cool It: Team Tackles the Thermal Challenge Data Centers Face by Rick Merritt (NVIDIA Blog) on May 25, 2023
Two years after he spoke at a conference detailing his ambitious vision for cooling tomorrow’s data centers, Ali Heydari and his team won a $5 million grant to go build it. It was the largest of 15 awards in May from the U.S. Department of Energy. The DoE program, called COOLERCHIPS, received more than 100
-
Butterfly Effects: Digital Artist Uses AI to Engage Exhibit Goers by Rick Merritt (NVIDIA Blog) on May 25, 2023
For about six years, AI has been an integral part of the artwork of Dominic Harris, a London-based digital artist who’s about to launch his biggest exhibition to date. “I use it for things like giving butterflies a natural sense of movement,” said Harris, whose typical canvas is an interactive computer display. Using a rack
-
3 new ways generative AI can help you search by (AI) on May 25, 2023
Today, we’re starting to open up access to SGE (Search Generative Experience), one of our first experiments in Search Labs.
-
Three More Xbox PC Games Hit GeForce NOW by GeForce NOW Community (NVIDIA Blog) on May 25, 2023
Keep the NVIDIA and Microsoft party going this GFN Thursday with Grounded, Deathloop and Pentiment now available to stream for GeForce NOW members this week. These three Xbox titles are part of the dozen additions to the GeForce NOW library. Triple Threat NVIDIA and Microsoft’s partnership continues to flourish with this week’s game additions. Who
-
Democratic Inputs to AI by OpenAI Blog on May 25, 2023
Our nonprofit organization, OpenAI, Inc., is launching a program to award ten $100,000 grants to fund experiments in setting up a democratic process for deciding what rules AI systems should follow, within the bounds defined by the law.
-
An early warning system for novel AI risks by DeepMind Blog on May 25, 2023
AI researchers already use a range of evaluation benchmarks to identify unwanted behaviours in AI systems, such as AI systems making misleading statements, biased decisions, or repeating copyrighted content. Now, as the AI community builds and deploys increasingly powerful AI, we must expand the evaluation portfolio to include the possibility of extreme risks from general-purpose AI models that have strong skills in manipulation, deception, cyber-offense, or other dangerous […]
-
Dialogue-guided intelligent document processing with foundation models on Amazon SageMaker JumpStart by Alfred Shen (AWS Machine Learning Blog) on May 24, 2023
Intelligent document processing (IDP) is a technology that automates the processing of high volumes of unstructured data, including text, images, and videos. IDP offers a significant improvement over manual methods and legacy optical character recognition (OCR) systems by addressing challenges such as cost, errors, low accuracy, and limited scalability, ultimately leading to better outcomes for
-
Optimizing HDFS with DataNode Local Cache for High-Density HDD Adoption by Latest News & Stories From Around The World | Uber Blog on May 24, 2023
This blog post unveils the seamless, exabyte-scale integration of local SSD disks into the Hadoop Distributed File System (HDFS), enabling the utilization of high-density disk SKUs to optimize disk IO and achieve exceptional performance.
-
Automate document validation and fraud detection in the mortgage underwriting process using AWS AI services: Part 1 by Anup Ravindranath (AWS Machine Learning Blog) on May 24, 2023
In this three-part series, we present a solution that demonstrates how you can automate detecting document tampering and fraud at scale using AWS AI and machine learning (ML) services for a mortgage underwriting use case. This solution rides on a more significant global wave of increasing mortgage fraud, which is worsening as more people present
-
Perform batch transforms with Amazon SageMaker Jumpstart Text2Text Generation large language models by Hemant Singh (AWS Machine Learning Blog) on May 24, 2023
Today we are excited to announce that you can now perform batch transforms with Amazon SageMaker JumpStart large language models (LLMs) for Text2Text Generation. Batch transforms are useful in situations where the responses don’t need to be real time and therefore you can do inference in batch for large datasets in bulk. For batch transform,
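The general shape of a SageMaker batch transform job, using the SageMaker Python SDK, is sketched below. The model name, S3 URIs, and instance type are placeholders, and the JumpStart-specific model packaging from the post is not reproduced here.

```python
# Hedged sketch of running a SageMaker batch transform against an already-created model.
from sagemaker.transformer import Transformer

transformer = Transformer(
    model_name="my-text2text-model",                  # hypothetical SageMaker model
    instance_count=1,
    instance_type="ml.g5.2xlarge",
    output_path="s3://my-bucket/batch-output/",
    strategy="SingleRecord",
)

transformer.transform(
    data="s3://my-bucket/batch-input/prompts.jsonl",  # one JSON record per line
    content_type="application/jsonlines",
    split_type="Line",
)
transformer.wait()  # block until the batch job finishes; results land in output_path
```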
-
Research Focus: Week of May 22, 2023 by Alyssa Hughes (Microsoft Research) on May 24, 2023
In this edition: New research explores the causal ability of LLMs and DNA storage in thermoresponsive capsules; a talk on human-centered AI; and a CFP for funding for LLM productivity research projects from the Microsoft New Future of Work Initiative.
-
Livestreaming Bliss: Wander Warwick’s World This Week ‘In the NVIDIA Studio’ by Gerardo Delgado (NVIDIA Blog) on May 24, 2023
The GeForce RTX 4060 Ti 8GB GPU is now available from top add-in card providers including ASUS, Colorful, Galax, GIGABYTE, INNO3D, MSI, Palit, PNY and ZOTAC, as well as from system integrators and builders worldwide.
-
Towards quantum machine learning by Machine learning : nature.com subject feeds on May 24, 2023
-
NVIDIA and Microsoft Drive Innovation for Windows PCs in New Era of Generative AI by Jesse Clayton (NVIDIA Blog) on May 23, 2023
Generative AI — in the form of large language model (LLM) applications like ChatGPT, image generators such as Stable Diffusion and Adobe Firefly, and game rendering techniques like NVIDIA DLSS 3 Frame Generation — is rapidly ushering in a new era of computing for productivity, content creation, gaming and more. At the Microsoft Build developer
-
No Programmers? No Problem: READY Robotics Simplifies Robot Coding, Rollouts by Scott Martin (NVIDIA Blog) on May 23, 2023
Robotics hardware traditionally requires programmers to deploy it. READY Robotics wants to change that with its “no code” software aimed at people working in manufacturing who haven’t got programming skills. The Columbus, Ohio, startup is a spinout of robotics research from Johns Hopkins University. Kel Guerin was a PhD candidate there leading this research when
-
Privateer Space: The Final Frontier in AI Space Junk Management by Brian Caulfield (NVIDIA Blog) on May 23, 2023
It’s time to take out the space trash. In this episode of the NVIDIA AI Podcast, host Noah Kravitz dives into an illuminating conversation with Alex Fielding, co-founder and CEO of Privateer Space. Fielding is a tech industry veteran, having previously worked alongside Apple co-founder Steve Wozniak on several projects, and holds a deep expertise
-
Uber Reveals 2022 “Airport of the Year” Award Winners by Latest News & Stories From Around The World | Uber Blog on May 22, 2023
Uber discusses its partnership with Portland International Jetport Director Paul Bradbury,
-
Helping more people stay safe with flood forecasting by (AI) on May 22, 2023
AI-powered Flood Hub is expanding to nearly 80 countries worldwide.
-
Governance of superintelligence by OpenAI Blog on May 22, 2023
Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI.
-
What’s Up? Watts Down — More Science, Less Energy by Dion Harris (NVIDIA Blog) on May 22, 2023
People agree: accelerated computing is energy-efficient computing. The National Energy Research Scientific Computing Center (NERSC), the U.S. Department of Energy’s lead facility for open science, measured results across four of its key high performance computing and AI applications. They clocked how fast the applications ran and how much energy they consumed on CPU-only and GPU-accelerated
-
A policy agenda for responsible AI progress: Opportunity, Responsibility, Security by (AI) on May 19, 2023
For society to reap the benefits of AI, opportunity, responsibility, and national security strategies must be baked into that shared AI agenda.
-
REACT — A synergistic cloud-edge fusion architecture by Alyssa Hughes (Microsoft Research) on May 18, 2023
This research paper was accepted by the eighth ACM/IEEE Conference on Internet of Things Design and Implementation (IoTDI), which is a premier venue on IoT. The paper describes a framework that leverages cloud resources to execute large deep neural network (DNN) models with higher accuracy to improve the accuracy of models running on edge devices. […]
-
Achieving Zero-COGS with Microsoft Editor Neural Grammar Checker by Alyssa Hughes (Microsoft Research) on May 18, 2023
Microsoft Editor provides AI-powered writing assistance to millions of users around the world. One of its features that writers of all levels and domains rely on is the grammar checker, which detects grammar errors in a user’s writing and offers suggested corrections and explanations of the detected errors. The technology behind grammar checker has evolved […]
-
Introducing the ChatGPT app for iOS by OpenAI Blog on May 18, 2023
The ChatGPT app syncs your conversations, supports voice input, and brings our latest model improvements to your fingertips.
-
A quantum computing partnership with the University of Chicago and the University of Tokyo by (AI) on May 17, 2023
The University of Tokyo, the University of Chicago and Google establish a strategic partnership on quantum information science and engineering.
-
How generational differences affect consumer attitudes towards ads by Meta Research on May 17, 2023
Our research study, in collaboration with CrowdDNA, aims to understand people's relationship with social media ads across different social media platforms.
-
Large-language models for automatic cloud incident management by Alyssa Hughes (Microsoft Research) on May 16, 2023
This research was accepted by the IEEE/ACM International Conference on Software Engineering (ICSE), which is a forum for researchers, practitioners, and educators to gather, present, and discuss the most recent innovations, trends, experiences, and issues in the field of software engineering. The Microsoft 365 Systems Innovation research group has a paper accepted at the 45th […]
-
How Central can help keep business moving across industries by Latest News & Stories From Around The World | Uber Blog on May 15, 2023
Central on Uber for Business allows companies to request rides for executives, employees, clients, and guests.
-
Highlights from CHI 2023 by Brenda Potts (Microsoft Research) on May 15, 2023
The ways in which people are able to interact with technologies can have a profound effect on a technology’s utility and adoptability. Building computing tools and services around people’s natural styles of work, communication, and play can give technology the value it needs to have meaningful impact. For decades, human-computer interaction (HCI) has examined the […]
-
Microsoft at EuroSys 2023: Systems innovation across the stack to help support an easier, faster, safer, and smarter cloud by Brenda Potts (Microsoft Research) on May 12, 2023
EuroSys 2023 is the premier systems conference in Europe, and 2023 marks its 18th edition. Sponsored by ACM SIGOPS Europe and hosted May 8 to May 12, the conference covers a wide range of topics, including operating systems, real-time and networked systems, storage and middleware, and distributed, parallel, and embedded computing, as well as their […]
-
The Mother’s Day Shop and Pay with Uber Eats Gift Card Sweepstakes Official Rules by Latest News & Stories From Around The World | Uber Blog on May 12, 2023
Complete a Shop & Pay trip and you will qualify to enter for a chance to win one of 100 $50 Uber Eats gift cards.
-
Uber Supports Congestion Pricing by Latest News & Stories From Around The World | Uber Blog on May 11, 2023
Uber supports congestion pricing. That’s why Uber riders have been paying a $2.75 congestion surcharge on every trip south of 96th St. since 2019. Now it’s time other road users pay, as well.
-
Cybersecurity Incident Simulation @ Uber by Latest News & Stories From Around The World | Uber Blog on May 11, 2023
We stand for safety and our approach to cybersecurity incident simulations is just one of the ways that we work to protect our riders, earners, eaters, and employees.
-
100 things we announced at I/O 2023 by (AI) on May 11, 2023
Google I/O 2023 was filled with news and launches — here are 100 things announced at I/O.
-
Introducing Project Gameface: A hands-free, AI-powered gaming mouse by (AI) on May 10, 2023
Project Gameface, a new open-source hands-free gaming mouse has the potential to make gaming more accessible.
-
Play I/O FLIP, our AI-designed card game by (AI) on May 10, 2023
Just in time for Google I/O 2023, try out I/O FLIP, an online card game built with generative AI.
-
Being bold on AI means being responsible from the start by (AI) on May 10, 2023
Google’s James Manyika discusses how Google responsibly applies AI to benefit people and society.
-
Test out Google features and products in Labs by (AI) on May 10, 2023
Get a first look at bold and responsible experiments across Google, and share feedback with the teams behind them.
-
Introducing PaLM 2 by (AI) on May 10, 2023
Today at I/O 2023, Google introduced PaLM 2, a new language model with improved multilingual, reasoning, and coding capabilities.
-
Supercharging Search with generative AI by (AI) on May 10, 2023
We’re starting with an experiment in Search Labs called SGE, Search Generative Experience, which uses generative AI.
-
What’s ahead for Bard: More global, more visual, more integrated by (AI) on May 10, 2023
We’re ending the waitlist for Bard, adding support for more regions, introducing images and connecting with partner apps.
-
Magic Editor in Google Photos: New AI editing features for reimagining your photos by (AI) on May 10, 2023
Magic Editor is an experimental editing experience that uses AI to help reimagine your photos — early access is planned for select Pixel phones later this year.
-
New ways AI is making Maps more immersive by (AI) on May 10, 2023
With advancements in AI, there are new ways to understand your route with Maps. Plus, new immersive tools for developers.
-
Turn ideas into music with MusicLM by (AI) on May 10, 2023
Starting today, you can sign up to try MusicLM, a new experimental AI tool that can turn your text descriptions into music.
-
Research Focus: Week of May 8, 2023 by Brenda Potts (Microsoft Research) on May 10, 2023
In this issue: Microsoft researchers win four more awards; AutoRXN automates calculations of molecular systems; LLM accelerator losslessly improves the efficiency of autoregressive decoding; a frequency domain approach to predict power system transients.
-
Language models can explain neurons in language models by OpenAI Blog on May 9, 2023
We use GPT-4 to automatically write explanations for the behavior of neurons in large language models and to score those explanations. We release a dataset of these (imperfect) explanations and scores for every neuron in GPT-2.
-
How 4 startups are using AI to solve climate change challenges by (AI) on May 8, 2023
These startups are using AI to decarbonize buildings, create sustainable agriculture, protect biodiversity, and remove carbon from the atmosphere.
-
Using generative AI to imitate human behavior by Alyssa Hughes (Microsoft Research) on May 4, 2023
Diffusion models have been used to generate photorealistic images and short videos, compose music, and synthesize speech. In a new paper, Microsoft Researchers explore how they can be used to imitate human behavior in interactive environments.
-
Inferring rewards through interaction by Alyssa Hughes (Microsoft Research) on May 4, 2023
In reinforcement learning, handcrafting reward functions is difficult and can yield algorithms that don’t generalize well. IGL-P, an interaction-grounded learning strategy, learns personalized rewards for different people in recommender system scenarios.
-
Uber Eats and Kelce Jam Signed Football Sweepstakes by Latest News & Stories From Around The World | Uber Blog on May 3, 2023
Enter for a chance to win a Kelce Jam football signed by Travis Kelce
-
Checks, Google’s AI-powered privacy platform by (AI) on May 3, 2023
Checks takes the complexity out of compliance, using AI to help companies quickly discover, communicate and fix issues.
-
Bootstrapping Uber’s Infrastructure on arm64 with Zig by Latest News & Stories From Around The World | Uber Blog on May 3, 2023
In this blog post we explain how we bootstrapped arm64 infrastructure using a relatively new toolchain in town: zig cc.
-
Try 4 new Arts and AI experiments by (AI) on May 3, 2023
Four new online interactive artworks from Google Arts & Culture Lab artists in residence
-
Uber Eats and Detroit Pistons Sweepstakes by Latest News & Stories From Around The World | Uber Blog on April 28, 2023
Enter for a chance to win a July trip to Las Vegas with the Detroit Pistons!
-
DeepMind’s latest research at ICLR 2023 by DeepMind Blog on April 27, 2023
Next week marks the start of the 11th International Conference on Learning Representations (ICLR), taking place 1-5 May in Kigali, Rwanda. This will be the first major artificial intelligence (AI) conference to be hosted in Africa and the first in-person event since the start of the pandemic. Researchers from around the world will gather to share their cutting-edge work in deep learning spanning the fields of AI, statistics and data science, and applications including […]
-
New ways to manage your data in ChatGPT by OpenAI Blog on April 25, 2023
ChatGPT users can now turn off chat history, allowing you to choose which conversations can be used to train our models.
-
How can we build human values into AI? by DeepMind Blog on April 24, 2023
As artificial intelligence (AI) becomes more powerful and more deeply integrated into our lives, the questions of how it is used and deployed are all the more important. What values guide AI? Whose values are they? And how are they selected?
-
Bard now helps you code by (AI) on April 21, 2023
Bard can now help with programming and software development tasks, across more than 20 programming languages.
-
Google DeepMind: Bringing together two world-class AI teams by (AI) on April 20, 2023
We announced some changes that will accelerate our progress in AI and help us develop more capable AI systems more safely and responsibly.
-
Measuring Performance for iOS Apps at Uber Scale by Latest News & Stories From Around The World | Uber Blog on April 20, 2023
Curious about the magic behind Uber’s iOS app performance? Check out our blog post to learn how we overcame scalability challenges in our approach to measuring app reliability metrics.
-
Announcing Google DeepMind by DeepMind Blog on April 20, 2023
DeepMind and the Brain team from Google Research will join forces to accelerate progress towards a world in which AI helps solve the biggest challenges facing humanity.
-
Every tree counts by Meta Research on April 17, 2023
Meta set a goal to reach net zero emissions by 2030. We are developing technology to mitigate our carbon footprint and making these openly available.
-
How a non-traditional background led to cutting-edge XR tech by Meta Research on April 14, 2023
-
InsureTech: Insurance Compliance by Latest News & Stories From Around The World | Uber Blog on April 13, 2023
Insurance regulatory compliance is critical for Uber. Discover the digital verification services, processes, and practices with insurance carrier partners that enable our teams to achieve our compliance goals.
-
A new, unique AI dataset for animating amateur drawings by Meta Research on April 13, 2023
-
How the metaverse can transform education by Meta Research on April 12, 2023
-
Announcing OpenAI’s Bug Bounty Program by OpenAI Blog on April 11, 2023
This initiative is essential to our commitment to develop safe and advanced AI. As we create technology and services that are secure, reliable, and trustworthy, we need your help.
-
Rider guide for the big 2023 music festivals in Coachella Valley by Latest News & Stories From Around The World | Uber Blog on April 7, 2023
Information to help you plan your rides with Uber during the big 2023 music festivals in Coachella Valley.
-
Build faster with Buck2: Our open source build system by Meta Research on April 6, 2023
-
Announcing the 2023 Meta Research PhD Fellowship award winners by Meta Research on April 5, 2023
...
-
Our approach to AI safety by OpenAI Blog on April 5, 2023
Ensuring that AI systems are built, deployed, and used safely is critical to our mission.
-
Driving during the big 2023 music festivals in Coachella Valley by Latest News & Stories From Around The World | Uber Blog on April 4, 2023
Learn more about rideshare plans and earning opportunities for the big music festivals in Coachella Valley.
-
Announcing the winners of the 2022 Foundational Integrity Research request for proposals by Meta Research on March 27, 2023
In September, Meta launched the Foundational Integrity Research request for proposals. Today, we announce the winners of this award.
-
Two Meta sustainability grant and scholarship recipients share impact by Meta Research on March 24, 2023
-
March 20 ChatGPT outage: Here’s what happened by OpenAI Blog on March 24, 2023
An update on our findings, the actions we’ve taken, and technical details of the bug.
-
ChatGPT plugins by OpenAI Blog on March 23, 2023
We’ve implemented initial support for plugins in ChatGPT. Plugins are tools designed specifically for language models with safety as a core principle, and help ChatGPT access up-to-date information, run computations, or use third-party services.
-
GPT-4 by OpenAI Blog on March 14, 2023
We’ve created GPT-4, the latest milestone in OpenAI’s effort in scaling up deep learning. GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks.
-
Introducing ChatGPT and Whisper APIs by OpenAI Blog on March 1, 2023
Developers can now integrate ChatGPT and Whisper models into their apps and products through our API.
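For orientation, the sketch below uses the openai Python package as it existed at the time (the pre-1.0 interface). The API key, file name, and prompt are placeholders.

```python
# Minimal sketch of the ChatGPT and Whisper APIs via the pre-1.0 openai Python package.
import openai

openai.api_key = "sk-..."  # placeholder

# Chat completion with the gpt-3.5-turbo model that powers ChatGPT.
chat = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Summarize this release note in one sentence."}],
)
print(chat.choices[0].message["content"])

# Speech-to-text with the whisper-1 model.
with open("meeting.mp3", "rb") as audio_file:
    transcript = openai.Audio.transcribe("whisper-1", audio_file)
print(transcript["text"])
```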
-
Planning for AGI and beyond by OpenAI Blog on February 24, 2023
Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity.
-
How should AI systems behave, and who should decide? by OpenAI Blog on February 16, 2023
We’re clarifying how ChatGPT’s behavior is shaped and our plans for improving that behavior, allowing more user customization, and getting more public input into our decision-making in these areas.
-
Introducing ChatGPT Plus by OpenAI Blog on February 1, 2023
We’re launching a pilot subscription plan for ChatGPT, a conversational AI that can chat with you, answer follow-up questions, and challenge incorrect assumptions.
-
New AI classifier for indicating AI-written text by OpenAI Blog on January 31, 2023
We’re launching a classifier trained to distinguish between AI-written and human-written text.
-
OpenAI and Microsoft extend partnership by OpenAI Blog on January 23, 2023
We’re happy to announce that OpenAI and Microsoft are extending our partnership.
-
Forecasting potential misuses of language models for disinformation campaigns and how to reduce risk by OpenAI Blog on January 11, 2023
OpenAI researchers collaborated with Georgetown University’s Center for Security and Emerging Technology and the Stanford Internet Observatory to investigate how large language models might be misused for disinformation purposes. The collaboration included an October 2021 workshop bringing together 30 disinformation researchers, machine learning experts, and policy analysts, and culminated in a co-authored report building on more than a year of research. This report […]
-
The power of continuous learning by OpenAI Blog on December 23, 2022
Lilian Weng works on Applied AI Research at OpenAI.
-
Point-E: A system for generating 3D point clouds from complex prompts by OpenAI Blog on December 16, 2022
-
New and improved embedding model by OpenAI Blog on December 15, 2022
We are excited to announce a new embedding model which is significantly more capable, cost effective, and simpler to use.
-
Discovering the minutiae of backend systems by OpenAI Blog on December 8, 2022
Christian Gibson is an engineer on the Supercomputing team at OpenAI.
-
Competitive programming with AlphaCode by DeepMind Blog on December 8, 2022
Solving novel problems and setting a new milestone in competitive programming.
-
AI for the board game Diplomacy by DeepMind Blog on December 6, 2022
Successful communication and cooperation have been crucial for helping societies advance throughout history. The closed environments of board games can serve as a sandbox for modelling and investigating interaction and communication – and we can learn a lot from playing them. In our recent paper, published today in Nature Communications, we show how artificial agents can use communication to better cooperate in the board game Diplomacy, a vibrant domain in artificial […]
-
Mastering Stratego, the classic game of imperfect information by DeepMind Blog on December 1, 2022
Game-playing artificial intelligence (AI) systems have advanced to a new frontier. Stratego, the classic board game that’s more complex than chess and Go, and craftier than poker, has now been mastered. Published in Science, we present DeepNash, an AI agent that learned the game from scratch to a human expert level by playing against itself.
-
Introducing ChatGPT by OpenAI Blog on November 30, 2022
We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.
-
DeepMind’s latest research at NeurIPS 2022 by DeepMind Blog on November 25, 2022
NeurIPS is the world’s largest conference in artificial intelligence (AI) and machine learning (ML), and we’re proud to support the event as Diamond sponsors, helping foster the exchange of research advances in the AI and ML community. Teams from across DeepMind are presenting 47 papers, including 35 external collaborations in virtual panels and poster sessions.
-
Building interactive agents in video game worlds by DeepMind Blog on November 23, 2022
Most artificial intelligence (AI) researchers now believe that writing computer code which can capture the nuances of situated interactions is impossible. Instead, modern machine learning (ML) researchers have focused on learning about these types of interactions from data. To explore these learning-based approaches and quickly build agents that can make sense of human instructions and safely perform actions in open-ended conditions, we created a research framework […]
-
Benchmarking the next generation of never-ending learners by DeepMind Blog on November 22, 2022
Our new paper, NEVIS’22: A Stream of 100 Tasks Sampled From 30 Years of Computer Vision Research, proposes a playground to study the question of efficient knowledge transfer in a controlled and reproducible setting. The Never-Ending Visual classification Stream (NEVIS’22) is a benchmark stream in addition to an evaluation protocol, a set of initial baselines, and an open-source codebase. This package provides an opportunity for researchers to explore how models can […]
-
Best practices for data enrichment by DeepMind Blog on November 16, 2022
At DeepMind, our goal is to make sure everything we do meets the highest standards of safety and ethics, in line with our Operating Principles. One of the most important places this starts with is how we collect our data. In the past 12 months, we’ve collaborated with Partnership on AI (PAI) to carefully consider these challenges, and have co-developed standardised best practices and processes for responsible human data collection.
-
The pursuit of AI education - past, present, and future by DeepMind Blog on November 8, 2022
Meet Sylvia Christie, our education partnerships manager who’s played a leading role in expanding our scholarship programme, which is marking its five-year anniversary.
-
DALL·E API now available in public beta by OpenAI Blog on November 3, 2022
Starting today, developers can begin building apps with the DALL·E API.
-
Digital transformation with Google Cloud by DeepMind Blog on October 20, 2022
We’ve partnered with Google Cloud over the last few years to apply our AI research for making a positive impact on core solutions used by their customers. Here, we introduce a few of these projects, including optimising document understanding, enhancing the value of wind energy, and offering easier use of AlphaFold.
-
Scaling laws for reward model overoptimization by OpenAI Blog on October 19, 2022
-
Measuring perception in AI models by DeepMind Blog on October 12, 2022
Perception – the process of experiencing the world through senses – is a significant part of intelligence. And building agents with human-level perceptual understanding of the world is a central but challenging task, which is becoming increasingly important in robotics, self-driving cars, personal assistants, medical imaging, and more. So today, we’re introducing the Perception Test, a multimodal benchmark using real-world videos to help evaluate the perception […]
-
How undesired goals can arise with correct rewards by DeepMind Blog on October 7, 2022
As we build increasingly advanced artificial intelligence (AI) systems, we want to make sure they don’t pursue undesired goals. Such behaviour in an AI agent is often the result of specification gaming – exploiting a poor choice of what they are rewarded for. In our latest paper, we explore a more subtle mechanism by which AI systems may unintentionally learn to pursue undesired goals: goal misgeneralisation (GMG). GMG occurs when a system's capabilities generalise […]
-
Discovering novel algorithms with AlphaTensor by DeepMind Blog on October 5, 2022
In our paper, published today in Nature, we introduce AlphaTensor, the first artificial intelligence (AI) system for discovering novel, efficient, and provably correct algorithms for fundamental tasks such as matrix multiplication. This sheds light on a 50-year-old open question in mathematics about finding the fastest way to multiply two matrices. This paper is a stepping stone in DeepMind’s mission to advance science and unlock the most fundamental problems using AI. Our […]
-
Supporting the next generation of AI leaders by DeepMind Blog on September 26, 2022
We’re partnering with six education charities and social enterprises in the United Kingdom (UK) to co-create a bespoke education programme to help tackle the gaps in STEM education and boost existing programmes.
-
Building safer dialogue agents by DeepMind Blog on September 22, 2022
In our latest paper, we introduce Sparrow – a dialogue agent that’s useful and reduces the risk of unsafe and inappropriate answers. Our agent is designed to talk with a user, answer questions, and search the internet using Google when it’s helpful to look up evidence to inform its responses.
-
Introducing Whisper by OpenAI Blog on September 21, 2022
We’ve trained and are open-sourcing a neural net called Whisper that approaches human level robustness and accuracy on English speech recognition.
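Since the model is open source, transcription can be tried locally in a few lines with the openai-whisper package. This is a minimal sketch; the audio file name is a placeholder, and larger model sizes trade speed for accuracy.

```python
# Minimal sketch with the open-source openai-whisper package (pip install openai-whisper).
import whisper

model = whisper.load_model("base")          # small multilingual checkpoint
result = model.transcribe("interview.wav")  # placeholder audio file
print(result["text"])
```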
-
How our principles helped define AlphaFold’s release by DeepMind Blog on September 14, 2022
Our Operating Principles have come to define both our commitment to prioritising widespread benefit, as well as the areas of research and applications we refuse to pursue. These principles have been at the heart of our decision making since DeepMind was founded, and continue to be refined as the AI landscape changes and grows. They are designed for our role as a research-driven science company and consistent with Google’s AI principles.
-
Maximising the impact of our breakthroughs by DeepMind Blog on September 9, 2022
Colin, CBO at DeepMind, discusses collaborations with Alphabet and how we integrate ethics, accountability, and safety into everything we do.
-
My journey from DeepMind intern to mentor by DeepMind Blog on September 8, 2022
Former intern turned intern manager, Richard Everett, describes his journey to DeepMind, sharing tips and advice for aspiring DeepMinders. The 2023 internship applications will open on the 16th September, please visit https://dpmd.ai/internshipsatdeepmind for more information.
-
In conversation with AI: building better language models by DeepMind Blog on September 6, 2022
Our new paper, In conversation with AI: aligning language models with human values, explores a different approach, asking what successful communication between humans and an artificial conversational agent might look like and what values should guide conversation in these contexts.
-
From motor control to embodied intelligence by DeepMind Blog on August 31, 2022
-
Advancing conservation with AI-based facial recognition of turtles by DeepMind Blog on August 25, 2022
We came across Zindi – a dedicated partner with complementary goals – who are the largest community of African data scientists and host competitions that focus on solving Africa’s most pressing problems. Our Science team’s Diversity, Equity, and Inclusion (DE&I) team worked with Zindi to identify a scientific challenge that could help advance conservation efforts and grow involvement in AI. Inspired by Zindi’s bounding box turtle challenge, we landed on a […]
-
Discovering when an agent is present in a system by DeepMind Blog on August 18, 2022
We want to build safe, aligned artificial general intelligence (AGI) systems that pursue the intended goals of its designers. Causal influence diagrams (CIDs) are a way to model decision-making situations that allow us to reason about agent incentives. By relating training setups to the incentives that shape agent behaviour, CIDs help illuminate potential risks before training an agent and can inspire better agent designs. But how do we know when a CID is an accurate model […]
-
Realising scientists are the real superheroes by DeepMind Blog on August 11, 2022
Meet Edgar Duéñez-Guzmán, a research engineer on our Multi-Agent Research team who’s drawing on knowledge of game theory, computer science, and social evolution to get AI agents working better together.
-
Efficient training of language models to fill in the middle by OpenAI Blog on July 28, 2022
-
AlphaFold reveals the structure of the protein universe by DeepMind Blog on July 28, 2022
Today, in partnership with EMBL’s European Bioinformatics Institute (EMBL-EBI), we’re now releasing predicted structures for nearly all catalogued proteins known to science, which will expand the AlphaFold DB by over 200x - from nearly 1 million structures to over 200 million structures - with the potential to dramatically increase our understanding of biology.
-
A hazard analysis framework for code synthesis large language models by OpenAI Blog on July 25, 2022
-
The virtuous cycle of AI research by DeepMind Blog on July 19, 2022
We recently caught up with Petar Veličković, a research scientist at DeepMind. Along with his co-authors, Petar is presenting his paper The CLRS Algorithmic Reasoning Benchmark at ICML 2022 in Baltimore, Maryland, USA.
-
Perceiver AR: general-purpose, long-context autoregressive generation by DeepMind Blog on July 16, 2022
We develop Perceiver AR, an autoregressive, modality-agnostic architecture which uses cross-attention to map long-range inputs to a small number of latents while also maintaining end-to-end causal masking. Perceiver AR can directly attend to over a hundred thousand tokens, enabling practical long-context density estimation without the need for hand-crafted sparsity patterns or memory mechanisms.
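The core idea of cross-attending from a small set of latents to a much longer input can be sketched in a few lines of PyTorch, as below. This is only an illustration of that one step: Perceiver AR's causal masking and the rest of the architecture are omitted, and the shapes and sizes are arbitrary.

```python
# Illustrative sketch: a small set of latent vectors cross-attends to a long input sequence.
import torch
import torch.nn as nn

batch, seq_len, num_latents, dim = 2, 8192, 256, 512

inputs = torch.randn(batch, seq_len, dim)        # long input sequence (keys/values)
latents = torch.randn(batch, num_latents, dim)   # in the real model these are learned queries

cross_attn = nn.MultiheadAttention(embed_dim=dim, num_heads=8, batch_first=True)
compressed, _ = cross_attn(query=latents, key=inputs, value=inputs)

print(compressed.shape)  # torch.Size([2, 256, 512]) -- long context squeezed into 256 latents
```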
-
DeepMind’s latest research at ICML 2022 by DeepMind Blog on July 15, 2022
Starting this weekend, the thirty-ninth International Conference on Machine Learning (ICML 2022) is meeting from 17-23 July, 2022 at the Baltimore Convention Center in Maryland, USA, and will be running as a hybrid event. Researchers working across artificial intelligence, data science, machine vision, computational biology, speech recognition, and more are presenting and publishing their cutting-edge work in machine learning.
-
Working together with YouTube by DeepMind Blog on July 14, 2022
Applying our AI research to enhance the YouTube experience. Helping enrich people’s lives with our research, we’ve partnered with businesses across Alphabet to apply our technology towards improving the products and services used by billions of people every day.
-
Intuitive physics learning in a deep-learning model inspired by developmental psychology by DeepMind Blog on July 11, 2022
Despite significant effort, current AI systems pale in their understanding of intuitive physics, in comparison to even very young children. In the present work, we address this AI problem, specifically by drawing on the field of developmental psychology.
-
Human-centred mechanism design with Democratic AI by DeepMind Blog on July 4, 2022
In our recent paper, published in Nature Human Behaviour, we provide a proof-of-concept demonstration that deep reinforcement learning (RL) can be used to find economic policies that people will vote for by majority in a simple game. The paper thus addresses a key challenge in AI research - how to train AI systems that align with human values.
-
DALL·E 2 pre-training mitigations by OpenAI Blog on June 28, 2022
In order to share the magic of DALL·E 2 with a broad audience, we needed to reduce the risks associated with powerful image generation models. To this end, we put various guardrails in place to prevent generated images from violating our content policy.
-
Learning to play Minecraft with Video PreTraining by OpenAI Blog on June 23, 2022
We trained a neural network to play Minecraft by Video PreTraining (VPT) on a massive unlabeled video dataset of human Minecraft play, while using only a small amount of labeled contractor data. With fine-tuning, our model can learn to craft diamond tools, a task that usually takes proficient humans over 20 minutes (24,000 actions). Our model uses the native human interface of keypresses and mouse movements, making it quite general, and represents a step towards general […]
-
Leading a movement to strengthen machine learning in Africa by DeepMind Blog on June 23, 2022
-
BYOL-Explore: Exploration with Bootstrapped Prediction by DeepMind Blog on June 20, 2022
We present BYOL-Explore, a conceptually simple yet general approach for curiosity-driven exploration in visually-complex environments. BYOL-Explore learns a world representation, the world dynamics, and an exploration policy all-together by optimizing a single prediction loss in the latent space with no additional auxiliary objective. We show that BYOL-Explore is effective in DM-HARD-8, a challenging partially-observable continuous-action hard-exploration benchmark with […]
-
Evolution through large models by OpenAI Blog on June 17, 2022
-
Unlocking High-Accuracy Differentially Private Image Classification through Scale by DeepMind Blog on June 17, 2022
According to empirical evidence from prior works, utility degradation in DP-SGD becomes more severe on larger neural network models – including the ones regularly used to achieve the best performance on challenging image classification benchmarks. Our work investigates this phenomenon and proposes a series of simple modifications to both the training procedure and model architecture, yielding a significant improvement on the accuracy of DP training on standard image […]
-
Bridging DeepMind research with Alphabet products by DeepMind Blog on June 15, 2022
Today we caught up with Gemma Jennings, a product manager on the Applied team, who led a session on vision language models at the AI Summit, one of the world’s largest AI events for business.
-
AI-written critiques help humans notice flawsby OpenAI Blog on June 13, 2022
We trained “critique-writing” models to describe flaws in summaries. Human evaluators find flaws in summaries much more often when shown our model’s critiques. Larger models are better at self-critiquing, with scale improving critique-writing more than summary-writing. This shows promise for using AI systems to assist human supervision of AI systems on difficult tasks.
-
Techniques for training large neural networksby OpenAI Blog on June 9, 2022
Large neural networks are at the core of many recent advances in AI, but training them is a difficult engineering and research challenge which requires orchestrating a cluster of GPUs to perform a single synchronized calculation.
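As a toy illustration of the simplest such technique, data parallelism, here is a simulated synchronized update in which every worker's gradient is averaged (the all-reduce a real cluster would perform over NCCL or MPI) before one shared step is applied. This is a sketch under those assumptions, not code from the post.

```python
import numpy as np

# Data parallelism in miniature: each "worker" computes a gradient on its own
# shard of the batch, the gradients are averaged across workers, and every
# replica applies the identical update so the copies of the model stay in sync.

def all_reduce_mean(grads_per_worker):
    return np.mean(grads_per_worker, axis=0)   # stand-in for a cluster all-reduce

rng = np.random.default_rng(0)
params = np.zeros(4)
worker_grads = [rng.normal(size=4) for _ in range(8)]     # 8 workers, 8 micro-batches
params -= 0.1 * all_reduce_mean(worker_grads)             # same update on every replica
print(params)
```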
-
Advocating for the LGBTQ+ community in AI researchby DeepMind Blog on June 1, 2022
Research scientist Kevin McKee tells how his early love of science fiction and social psychology inspired his career, and how he’s helping advance research in ‘queer fairness’, support human-AI collaboration, and study the effects of AI on the LGBTQ+ community.
-
Teaching models to express their uncertainty in wordsby OpenAI Blog on May 28, 2022
-
Evaluating Multimodal Interactive Agentsby DeepMind Blog on May 27, 2022
In this paper, we assess the merits of these existing evaluation metrics and present a novel approach to evaluation called the Standardised Test Suite (STS). The STS uses behavioural scenarios mined from real human interaction data.
-
Dynamic language understanding: adaptation to new knowledge in parametric and semi-parametric modelsby DeepMind Blog on May 26, 2022
To study how semi-parametric QA models and their underlying parametric language models (LMs) adapt to evolving knowledge, we construct a new large-scale dataset, StreamingQA, with human written and generated questions asked on a given date, to be answered from 14 years of time-stamped news articles. We evaluate our models quarterly as they read new articles not seen in pre-training. We show that parametric models can be updated without full retraining, while avoiding […]
-
Kyrgyzstan to King’s Cross: the star baker cooking up codeby DeepMind Blog on May 26, 2022
My day can vary, it really depends on which phase of the project I'm on. Let’s say we want to add a feature to our product – my tasks could range from designing solutions and working with the team to find the best one, to deploying new features into production and doing maintenance. Along the way, I’ll communicate changes to our stakeholders, write docs, code and test solutions, build analytics dashboards, clean-up old code, and fix bugs.
-
Building a culture of pioneering responsiblyby DeepMind Blog on May 24, 2022
When I joined DeepMind as COO, I did so in large part because I could tell that the founders and team had the same focus on positive social impact. In fact, at DeepMind, we now champion a term that perfectly captures my own values and hopes for integrating technology into people’s daily lives: pioneering responsibly. I believe pioneering responsibly should be a priority for anyone working in tech. But I also recognise that it’s especially important when it comes to […]
-
Open-sourcing MuJoCoby DeepMind Blog on May 23, 2022
In October 2021, we announced that we acquired the MuJoCo physics simulator, and made it freely available for everyone to support research everywhere. We also committed to developing and maintaining MuJoCo as a free, open-source, community-driven project with best-in-class capabilities. Today, we’re thrilled to report that open sourcing is complete and the entire codebase is on GitHub! Here, we explain why MuJoCo is a great platform for open-source collaboration and share […]
-
From LEGO competitions to DeepMind's robotics labby DeepMind Blog on May 19, 2022
If you want to be at DeepMind, go for it. Apply, interview, and just try. You might not get it the first time but that doesn’t mean you can’t try again. I never thought DeepMind would accept me, and when they did, I thought it was a mistake. Everyone doubts themselves – I’ve never felt like the smartest person in the room. I’ve often felt the opposite. But I’ve learned that, despite those feelings, I do belong and I do deserve to work at a place like this. And […]
-
Emergent Bartering Behaviour in Multi-Agent Reinforcement Learningby DeepMind Blog on May 16, 2022
In our recent paper, we explore how populations of deep reinforcement learning (deep RL) agents can learn microeconomic behaviours, such as production, consumption, and trading of goods. We find that artificial agents learn to make economically rational decisions about production, consumption, and prices, and react appropriately to supply and demand changes.
-
A Generalist Agentby DeepMind Blog on May 12, 2022
Inspired by progress in large-scale language modelling, we apply a similar approach towards building a single generalist agent beyond the realm of text outputs. The agent, which we refer to as Gato, works as a multi-modal, multi-task, multi-embodiment generalist policy. The same network with the same weights can play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based on its context whether to output text, joint torques, button […]
-
Active offline policy selectionby DeepMind Blog on May 6, 2022
To make RL more applicable to real-world applications like robotics, we propose using an intelligent evaluation procedure to select the policy for deployment, called active offline policy selection (A-OPS). In A-OPS, we make use of the prerecorded dataset and allow limited interactions with the real environment to boost the selection quality.
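A rough sketch of that selection loop, assuming independent Gaussian beliefs per policy rather than the Gaussian process over policies used in the paper: start from off-policy value estimates, then spend a small budget of real-environment episodes where they are most informative.

```python
import numpy as np

# Hedged sketch of active offline policy selection: OPE estimates seed a
# per-policy Gaussian belief, a small budget of real rollouts refines it,
# and the policy with the best posterior mean is chosen for deployment.

def active_policy_selection(ope_means, ope_vars, run_episode, budget=10, obs_noise=1.0):
    means, variances = np.array(ope_means, float), np.array(ope_vars, float)
    for _ in range(budget):
        i = int(np.argmax(means + np.sqrt(variances)))    # optimistic acquisition
        ret = run_episode(i)                              # one real rollout of policy i
        # Bayesian update of a Gaussian mean with known observation noise
        precision = 1.0 / variances[i] + 1.0 / obs_noise
        means[i] = (means[i] / variances[i] + ret / obs_noise) / precision
        variances[i] = 1.0 / precision
    return int(np.argmax(means))

# Toy usage: three candidate policies with noisy OPE estimates.
true_values = [0.2, 0.8, 0.5]
rng = np.random.default_rng(0)
best = active_policy_selection(
    ope_means=[0.4, 0.6, 0.7], ope_vars=[0.5, 0.5, 0.5],
    run_episode=lambda i: true_values[i] + rng.normal(0, 0.3))
print("selected policy:", best)
```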
-
Tackling multiple tasks with a single visual language modelby DeepMind Blog on April 28, 2022
We introduce Flamingo, a single visual language model (VLM) that sets a new state of the art in few-shot learning on a wide range of open-ended multimodal tasks.
-
When a passion for bass and brass help build better toolsby DeepMind Blog on April 28, 2022
We caught up with Kevin Millikin, a software engineer on the DevTools team. He’s in Salt Lake City this week to present at PyCon US, the largest annual gathering for those using and developing the open-source Python programming language.
-
DeepMind’s latest research at ICLR 2022by DeepMind Blog on April 25, 2022
Beyond supporting the event as sponsors and regular workshop organisers, our research teams are presenting 29 papers, including 10 collaborations this year. Here’s a brief glimpse into our upcoming oral, spotlight, and poster presentations.
-
Measuring Goodhart’s lawby OpenAI Blog on April 13, 2022
Goodhart’s law famously says: “When a measure becomes a target, it ceases to be a good measure.” Although originally from economics, it’s something we have to grapple with at OpenAI when figuring out how to optimize objectives that are difficult or costly to measure.
-
Hierarchical text-conditional image generation with CLIP latentsby OpenAI Blog on April 13, 2022
-
An empirical analysis of compute-optimal large language model trainingby DeepMind Blog on April 12, 2022
We ask the question: “What is the optimal model size and number of training tokens for a given compute budget?” To answer this question, we train models of various sizes and with various numbers of tokens, and estimate this trade-off empirically. Our main finding is that the current large language models are far too large for their compute budget and are not being trained on enough data.
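As a rough worked example of that trade-off, using two approximations commonly attached to this result, training compute C ≈ 6·N·D (N parameters, D tokens) and roughly 20 training tokens per parameter at the compute-optimal point; the exact constants are fitted empirically in the paper and these numbers are illustrative only.

```python
# Given a fixed compute budget C and the heuristic D ~ k*N with C ~ 6*N*D,
# solve for the compute-optimal parameter count N and token count D.

def compute_optimal(C, tokens_per_param=20.0):
    N = (C / (6.0 * tokens_per_param)) ** 0.5
    D = tokens_per_param * N
    return N, D

C = 6 * 70e9 * 1.4e12   # example budget: ~70B parameters trained on ~1.4T tokens
N, D = compute_optimal(C)
print(f"params ~ {N:.2e}, tokens ~ {D:.2e}")   # recovers ~7e10 params, ~1.4e12 tokens
```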
-
GopherCite: Teaching language models to support answers with verified quotesby DeepMind Blog on March 16, 2022
Language models like Gopher can “hallucinate” facts that appear plausible but are actually fake. Those who are familiar with this problem know to do their own fact-checking, rather than trusting what language models say. Those who are not may end up believing something that isn’t true. This paper describes GopherCite, a model which aims to address the problem of language model hallucination. GopherCite attempts to back up all of its factual claims with evidence from […]
-
Predicting the past with Ithacaby DeepMind Blog on March 9, 2022
The birth of human writing marked the dawn of History and is crucial to our understanding of past civilisations and the world we live in today. For example, more than 2,500 years ago, the Greeks began writing on stone, pottery, and metal to document everything from leases and laws to calendars and oracles, giving a detailed insight into the Mediterranean region. Unfortunately, it’s an incomplete record. Many of the surviving inscriptions have been damaged over the […]
-
Lessons learned on language model safety and misuseby OpenAI Blog on March 3, 2022
We describe our latest thinking in the hope of helping other AI developers address safety and misuse of deployed models.
-
Learning Robust Real-Time Cultural Transmission without Human Databy DeepMind Blog on March 3, 2022
In this work, we use deep reinforcement learning to generate artificial agents capable of test-time cultural transmission. Once trained, our agents can infer and recall navigational knowledge demonstrated by experts. This knowledge transfer happens in real time and generalises across a vast space of previously unseen tasks.
-
Probing Image-Language Transformers for Verb Understandingby DeepMind Blog on February 23, 2022
Multimodal Image-Language transformers have achieved impressive results on a variety of tasks that rely on fine-tuning (e.g., visual question answering and image retrieval). We are interested in shedding light on the quality of their pretrained representations--in particular, whether these models can distinguish verbs or only rely on the nouns in a given sentence. To do so, we collect a dataset of image-sentence pairs consisting of 447 verbs that are either visual or commonly […]
-
Accelerating fusion science through learned plasma controlby DeepMind Blog on February 16, 2022
Successfully controlling the nuclear fusion plasma in a tokamak with deep reinforcement learning
-
MuZero’s first step from research into the real worldby DeepMind Blog on February 11, 2022
Collaborating with YouTube to optimise video compression in the open source VP9 codec.
-
Red Teaming Language Models with Language Modelsby DeepMind Blog on February 7, 2022
In our recent paper, we show that it is possible to automatically find inputs that elicit harmful text from language models by generating inputs using language models themselves. Our approach provides one tool for finding harmful model behaviours before users are impacted, though we emphasize that it should be viewed as one component alongside many other techniques that will be needed to find harms and mitigate them once found.
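A minimal sketch of that loop, with stand-in callables for the red-team generator, the target model, and the harm classifier; the paper also explores few-shot, supervised, and RL variants of the generator.

```python
# Hedged sketch of red-teaming a language model with a language model: one
# model proposes test inputs, the target model replies, and a classifier
# flags harmful replies so they can be surfaced and mitigated.

def red_team(generate_test_case, target_lm, harm_classifier, n_cases=1000, threshold=0.5):
    failures = []
    for _ in range(n_cases):
        prompt = generate_test_case()           # red LM samples an adversarial input
        reply = target_lm(prompt)               # behaviour of the model under test
        score = harm_classifier(prompt, reply)  # probability the reply is harmful
        if score > threshold:
            failures.append((prompt, reply, score))
    # Most clearly harmful behaviours first, ready for mitigation (filtering,
    # fine-tuning, or blocking particular prompt patterns).
    return sorted(failures, key=lambda x: -x[2])

# Toy usage with trivial stand-ins:
cases = red_team(lambda: "Tell me about yourself.",
                 lambda p: "I would never share private data.",
                 lambda p, r: 0.0, n_cases=3)
print(len(cases), "flagged behaviours")
```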
-
DeepMind: The Podcast returns for Season 2by DeepMind Blog on January 25, 2022
We believe artificial intelligence (AI) is one of the most significant technologies of our age and we want to help people understand its potential and how it’s being created.
-
Spurious normativity enhances learning of compliance and enforcement behavior in artificial agentsby DeepMind Blog on January 18, 2022
In our recent paper we explore how multi-agent deep reinforcement learning can serve as a model of complex social interactions, like the formation of social norms. This new class of models could provide a path to create richer, more detailed simulations of the world.
-
Simulating matter on the quantum scale with AIby DeepMind Blog on December 9, 2021
Solving some of the major challenges of the 21st Century, such as producing clean electricity or developing high temperature superconductors, will require us to design new materials with specific properties. To do this on a computer requires the simulation of electrons, the subatomic particles that govern how atoms bond to form molecules and are also responsible for the flow of electricity in solids.
-
Language modelling at scale: Gopher, ethical considerations, and retrievalby DeepMind Blog on December 8, 2021
Language, and its role in demonstrating and facilitating comprehension - or intelligence - is a fundamental part of being human. It gives people the ability to communicate thoughts and concepts, express ideas, create memories, and build mutual understanding. These are foundational parts of social intelligence. It’s why our teams at DeepMind study aspects of language processing and communication, both in artificial agents and in humans.
-
Creating Interactive Agents with Imitation Learningby DeepMind Blog on December 8, 2021
We show that imitation learning of human-human interactions in a simulated world, in conjunction with self-supervised learning, is sufficient to produce a multimodal interactive agent, which we call MIA, that successfully interacts with non-adversarial humans 75% of the time. We further identify architectural and algorithmic techniques that improve performance, such as hierarchical action selection.
-
Improving language models by retrieving from trillions of tokensby DeepMind Blog on December 8, 2021
We explore an alternate path for improving language models: we augment transformers with retrieval over a database of text passages including web pages, books, news and code. We call our method RETRO, for “Retrieval Enhanced TRansfOrmers”.
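A minimal sketch of the retrieval step behind such a model, with a hashing stand-in for the frozen embedder used in the real system: the input is chunked, each chunk is embedded, and the nearest passages are fetched so the transformer can attend to them while predicting the next chunk.

```python
import numpy as np

# Hedged sketch of retrieval augmentation: embed a query chunk, rank passages
# in the database by inner product, and return the top-k for the decoder to
# cross-attend to. The hash-seeded "embedder" is purely a placeholder.

def embed(text, dim=64):
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

def retrieve_neighbours(chunk, database, k=2):
    q = embed(chunk)
    scored = sorted(database, key=lambda passage: -float(q @ embed(passage)))
    return scored[:k]      # passages the decoder would cross-attend to

database = ["Retrieval reduces what the model must memorise.",
            "The retrieval database spans web pages, books, news and code.",
            "Cross-attention injects retrieved text into the decoder."]
print(retrieve_neighbours("Why augment a language model with retrieval?", database))
```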
-
Exploring the beauty of pure mathematics in novel waysby DeepMind Blog on December 1, 2021
More than a century ago, Srinivasa Ramanujan shocked the mathematical world with his extraordinary ability to see remarkable patterns in numbers that no one else could see. The self-taught mathematician from India described his insights as deeply intuitive and spiritual, and patterns often came to him in vivid dreams.
-
On the Expressivity of Markov Rewardby DeepMind Blog on December 1, 2021
Our main results prove that while reward can express many tasks, there exist instances of each task type that no Markov reward function can capture. We then provide a set of polynomial-time algorithms that construct a reward function which allows an agent to optimize tasks of each of these three types, and correctly determine when no such reward function exists.
-
Unsupervised deep learning identifies semantic disentanglement in single inferotemporal face patch neuronsby DeepMind Blog on November 9, 2021
Our brain has an amazing ability to process visual information. We can take one glance at a complex scene, and within milliseconds be able to parse it into objects and their attributes, like colour or size, and use this information to describe the scene in simple language. Underlying this seemingly effortless ability is a complex computation performed by our visual cortex, which involves taking millions of neural impulses transmitted from the retina and transforming them […]
-
Real-world challenges for AGIby DeepMind Blog on November 2, 2021
When people picture a world with artificial general intelligence (AGI), robots are more likely to come to mind than enabling solutions to society’s most intractable problems. But I believe the latter is much closer to the truth. AI is already enabling huge leaps in tackling fundamental challenges: from solving protein folding to predicting accurate weather patterns, scientists are increasingly using AI to deduce the rules and principles that underpin highly complex […]
-
Opening up a physics simulator for roboticsby DeepMind Blog on October 18, 2021
When you walk, your feet make contact with the ground. When you write, your fingers make contact with the pen. Physical contacts are what makes interaction with the world possible. Yet, for such a common occurrence, contact is a surprisingly complex phenomenon. Taking place at microscopic scales at the interface of two bodies, contacts can be soft or stiff, bouncy or spongy, slippery or sticky. It’s no wonder our fingertips have four different types of touch-sensors. This […]
-
Stacking our way to more general robotsby DeepMind Blog on October 11, 2021
Picking up a stick and balancing it atop a log or stacking a pebble on a stone may seem like simple — and quite similar — actions for a person. However, most robots struggle with handling more than one such task at a time. Manipulating a stick requires a different set of behaviours than stacking stones, never mind piling various dishes on top of one another or assembling furniture. Before we can teach robots how to perform these kinds of tasks, they first need to learn […]
-
Predicting gene expression with AIby DeepMind Blog on October 4, 2021
When the Human Genome Project succeeded in mapping the DNA sequence of the human genome, the international research community were excited by the opportunity to better understand the genetic instructions that influence human health and development. DNA carries the genetic information that determines everything from eye colour to susceptibility to certain diseases and disorders. The roughly 20,000 sections of DNA in the human body known as genes contain instructions about the […]
-
Nowcasting the next hour of rainby DeepMind Blog on September 29, 2021
Our lives are dependent on the weather. At any moment in the UK, according to one study, one third of the country has talked about the weather in the past hour, reflecting the importance of weather in daily life. Amongst weather phenomena, rain is especially important because of its influence on our everyday decisions. Should I take an umbrella? How should we route vehicles experiencing heavy rain? What safety measures do we take for outdoor events? Will there be a flood? […]
-
Is Curiosity All You Need? On the Utility of Emergent Behaviours from Curious Explorationby DeepMind Blog on September 17, 2021
We argue that merely using curiosity for fast environment exploration or as a bonus reward for a specific task does not harness the full potential of this technique and misses useful skills. Instead, we propose to shift the focus towards retaining the behaviours which emerge during curiosity-based learning. We posit that these self-discovered behaviours serve as valuable skills in an agent’s repertoire to solve related tasks.
-
Challenges in Detoxifying Language Modelsby DeepMind Blog on September 15, 2021
In our paper, we focus on LMs and their propensity to generate toxic language. We study the effectiveness of different methods to mitigate LM toxicity, and their side-effects, and we investigate the reliability and limits of classifier-based automatic toxicity evaluation.
-
Building architectures that can handle the world’s databy DeepMind Blog on August 3, 2021
Most architectures used by AI systems today are specialists. A 2D residual network may be a good choice for processing images, but at best it’s a loose fit for other kinds of data — such as the Lidar signals used in self-driving cars or the torques used in robotics. What’s more, standard architectures are often designed with only one task in mind, often leading engineers to bend over backwards to reshape, distort, or otherwise modify their inputs and outputs in hopes […]
-
Generally capable agents emerge from open-ended playby DeepMind Blog on July 27, 2021
In recent years, artificial intelligence agents have succeeded in a range of complex game environments. For instance, AlphaZero beat world-champion programs in chess, shogi, and Go after starting out knowing no more than the basic rules of how to play. Through reinforcement learning (RL), this single system learnt by playing round after round of games through a repetitive process of trial and error. But AlphaZero still trained separately on each game — unable to […]
-
Putting the power of AlphaFold into the world’s handsby DeepMind Blog on July 22, 2021
When we announced AlphaFold 2 last December, it was hailed as a solution to the 50-year-old protein folding problem. Last week, we published the scientific paper and source code explaining how we created this highly innovative system, and today we’re sharing high-quality predictions for the shape of every single protein in the human body, as well as for the proteins of 20 additional organisms that scientists rely on for their research.
-
Enabling high-accuracy protein structure prediction at the proteome scaleby DeepMind Blog on July 22, 2021
Many novel machine learning innovations contribute to AlphaFold’s current level of accuracy. We give a high-level overview of the system below; for a technical description of the network architecture see our AlphaFold methods paper and especially its extensive Supplementary Information.
-
Melting Pot: an evaluation suite for multi-agent reinforcement learningby DeepMind Blog on July 14, 2021
Here we introduce Melting Pot, a scalable evaluation suite for multi-agent reinforcement learning. Melting Pot assesses generalisation to novel social situations involving both familiar and unfamiliar individuals, and has been designed to test a broad range of social interactions such as: cooperation, competition, deception, reciprocation, trust, stubbornness and so on. Melting Pot offers researchers a set of 21 MARL “substrates” (multi-agent games) on which to train […]
-
An update on our racial justice effortsby DeepMind Blog on June 4, 2021
In June 2020, after the killing of George Floyd in Minneapolis (USA) and the solidarity that followed as millions spoke out at Black Lives Matter protests around the world, I – like many others – reflected on the situation and how our organisation could contribute. I then shared some thoughts around DeepMind's intention to help combat racism and advance racial equity.
-
Advancing sports analytics through AI researchby DeepMind Blog on May 7, 2021
Creating testing environments to help progress AI research out of the lab and into the real world is immensely challenging. Given AI’s long association with games, it is perhaps no surprise that sports presents an exciting opportunity, offering researchers a testbed in which an AI-enabled system can assist humans in making complex, real-time decisions in a multiagent environment with dozens of dynamic, interacting individuals.
-
Game theory as an engine for large-scale data analysisby DeepMind Blog on May 6, 2021
Modern AI systems approach tasks like recognising objects in images and predicting the 3D structure of proteins as a diligent student would prepare for an exam. By training on many example problems, they minimise their mistakes over time until they achieve success. But this is a solitary endeavour and only one of the known forms of learning. Learning also takes place by interacting and playing with others. It’s rare that a single individual can solve extremely complex […]
-
Alchemy: A structured task distribution for meta-reinforcement learningby DeepMind Blog on February 8, 2021
There has been rapidly growing interest in developing methods for meta-learning within deep RL. Although there has been substantive progress toward such ‘meta-reinforcement learning,’ research in this area has been held back by a shortage of benchmark tasks. In the present work, we aim to ease this problem by introducing (and open-sourcing) Alchemy, a useful new benchmark environment for meta-RL, along with a suite of analysis tools.
-
Data, Architecture, or Losses: What Contributes Most to Multimodal Transformer Success?by DeepMind Blog on February 2, 2021
In this work, we examine what aspects of multimodal transformers – attention, losses, and pretraining data – are important in their success at multimodal pretraining. We find that multimodal attention, where both language and image transformers attend to each other, is crucial for these models’ success. Models with other types of attention (even with more depth or parameters) fail to achieve results comparable to shallower and smaller models with multimodal attention.
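As a toy illustration of what “multimodal attention” means here, a single-head cross-attention in which text tokens attend to image tokens and vice versa; there are no learned projections, and the point is purely to show the direction of information flow, not the paper's architecture.

```python
import numpy as np

# Text queries attend over image keys/values and image queries attend over
# text keys/values, so each modality's representation is updated using the
# other modality's tokens.

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(queries, keys_values):
    weights = softmax(queries @ keys_values.T / np.sqrt(queries.shape[-1]))
    return weights @ keys_values

rng = np.random.default_rng(0)
text_tokens = rng.normal(size=(5, 32))    # 5 word embeddings
image_tokens = rng.normal(size=(9, 32))   # 9 image-patch embeddings

text_informed_by_image = cross_attend(text_tokens, image_tokens)
image_informed_by_text = cross_attend(image_tokens, text_tokens)
print(text_informed_by_image.shape, image_informed_by_text.shape)
```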
-
MuZero: Mastering Go, chess, shogi and Atari without rulesby DeepMind Blog on December 23, 2020
In 2016, we introduced AlphaGo, the first artificial intelligence (AI) program to defeat humans at the ancient game of Go. Two years later, its successor - AlphaZero - learned from scratch to master Go, chess and shogi. Now, in a paper in the journal Nature, we describe MuZero, a significant step forward in the pursuit of general-purpose algorithms. MuZero masters Go, chess, shogi and Atari without needing to be told the rules, thanks to its ability to plan winning […]
-
Imitating Interactive Intelligenceby DeepMind Blog on December 11, 2020
We first create a simulated environment, the Playroom, in which virtual robots can engage in a variety of interesting interactions by moving around, manipulating objects, and speaking to each other. The Playroom’s dimensions can be randomised as can its allocation of shelves, furniture, landmarks like windows and doors, and an assortment of children's toys and domestic objects. The diversity of the environment enables interactions involving reasoning about space and object […]