-
Secure Amazon SageMaker Studio presigned URLs Part 1: Foundational infrastructure, by Ram Vittal (AWS Machine Learning Blog) on June 30, 2022
You can access Amazon SageMaker Studio notebooks from the Amazon SageMaker console via AWS Identity and Access Management (IAM) authenticated federation from your identity provider (IdP), such as Okta. When a Studio user opens the notebook link, Studio validates the federated user’s IAM policy to authorize access, and generates and resolves the presigned URL for
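For readers who want to see what the presigned-URL step looks like in practice, here is a minimal sketch (not the post's full architecture) of generating a Studio presigned URL with boto3; the domain ID and user profile name are placeholders, not values from the post.

```python
# Minimal sketch: generating a SageMaker Studio presigned URL with boto3.
# DomainId and UserProfileName below are hypothetical placeholders.
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

response = sm.create_presigned_domain_url(
    DomainId="d-xxxxxxxxxxxx",                 # hypothetical Studio domain ID
    UserProfileName="data-scientist-1",        # hypothetical user profile
    SessionExpirationDurationInSeconds=1800,   # how long the Studio session stays valid
)
print(response["AuthorizedUrl"])               # the presigned URL to open Studio
```

In the series, this call is made from inside the private network so that the resulting URL resolves over VPC endpoints rather than the public internet.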
-
Secure Amazon SageMaker Studio presigned URLs Part 2: Private API with JWT authentication, by Ram Vittal (AWS Machine Learning Blog) on June 30, 2022
In part 1 of this series, we demonstrated how to resolve an Amazon SageMaker Studio presigned URL from a corporate network using Amazon private VPC endpoints without traversing the internet. In this post, we will continue to build on top of the previous solution to demonstrate how to build a private API Gateway via Amazon API
-
Three Wheeling: Startup Faction Develops Affordable Tri-Wheel AVs on NVIDIA DRIVE, by Katie Burke (NVIDIA Blog) on June 30, 2022
Some things are easy as A, B, C. But when it comes to autonomous vehicles, the key may be in one, two, three. Faction, a Bay Area-based startup and NVIDIA Inception member, is preparing to debut its business-to-business autonomous delivery service, accelerating its commercial deployment with three-wheel production electric vehicles purpose-built for driverless services.
-
Mahima Pushkarna is making data easier to understand, by (AI) on June 30, 2022
Five years ago, information designer Mahima Pushkarna joined Google to make data easier to understand. As a senior interaction designer on the People + AI Research (PAIR) team, she designed Data Cards to help everyone better understand the contexts of the data they are using. The Data Cards Playbook puts Google’s AI Principles into practice by providing opportunities for feedback, relevant explanations and appeal. Recently, Mahima’s paper on Data Cards (co-written with […]
-
The Gaming Evolution Will Be Televised: GFN Thursday Levels Up the Living Room Experience on New Samsung TVs and More, by GeForce NOW Community (NVIDIA Blog) on June 30, 2022
Turn the TV on. GeForce NOW is leveling up gaming in the living room. The Samsung Gaming Hub launched today, delivering GeForce NOW natively on 2022 Samsung Smart TVs. Plus, the SHIELD Software Experience Upgrade 9.1 is now rolling out to all NVIDIA SHIELD TVs, delivering new gaming features that improve GeForce NOW.
-
Reflections from ethics and safety ‘on the ground’ at DeepMind, by DeepMind Blog on June 30, 2022
Boxi shares their experiences working as a program specialist on the ethics & society team to support ethical, safe and beneficial AI development, highlighting the importance of interdisciplinary and sociotechnical thinking.
-
Use a custom image to bring your own development environment to RStudio on Amazon SageMaker, by Michael Hsieh (AWS Machine Learning Blog) on June 29, 2022
RStudio on Amazon SageMaker is the industry’s first fully managed RStudio Workbench in the cloud. You can quickly launch the familiar RStudio integrated development environment (IDE), and dial up and down the underlying compute resources without interrupting your work, making it easy to build machine learning (ML) and analytics solutions in R at scale. RStudio on
-
The Metaverse Goes Industrial: Siemens, NVIDIA Extend Partnership to Bring Digital Twins Within Easy Reach, by Rev Lebaredian (NVIDIA Blog) on June 29, 2022
Silicon Valley magic met Wednesday with 175 years of industrial technology leadership as Siemens CEO Roland Busch and NVIDIA Founder and CEO Jensen Huang shared their vision for an “industrial metaverse” at the launch of the Siemens Xcelerator business platform in Munich. “When we combine the real and digital worlds we can achieve new levels
-
Text classification for online conversations with machine learning on AWS, by Ryan Brand (AWS Machine Learning Blog) on June 29, 2022
Online conversations are ubiquitous in modern life, spanning industries from video games to telecommunications. This has led to an exponential growth in the amount of online conversation data, which has helped in the development of state-of-the-art natural language processing (NLP) systems like chatbots and natural language generation (NLG) models. Over time, various NLP techniques for
-
NVIDIA, Partners Show Leading AI Performance and Versatility in MLPerf, by Shar Narasimhan (NVIDIA Blog) on June 29, 2022
NVIDIA and its partners continued to provide the best overall AI training performance and the most submissions across all benchmarks with 90% of all entries coming from the ecosystem, according to MLPerf benchmarks released today. The NVIDIA AI platform covered all eight benchmarks in the MLPerf Training 2.0 round, highlighting its leading versatility.
-
Hyperparameter optimization for fine-tuning pre-trained transformer models from Hugging Face, by Aaron Klein (AWS Machine Learning Blog) on June 29, 2022
Large attention-based transformer models have obtained massive gains on natural language processing (NLP). However, training these gigantic networks from scratch requires a tremendous amount of data and compute. For smaller NLP datasets, a simple yet effective strategy is to use a pre-trained transformer, usually trained in an unsupervised fashion on very large datasets, and fine-tune
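As a rough illustration of the idea (not the post's exact SageMaker setup), the sketch below tunes two fine-tuning hyperparameters of a pre-trained transformer using the Hugging Face Trainer's built-in search with an Optuna backend; the model name, dataset, and search ranges are assumptions chosen for brevity.

```python
# Hedged sketch: hyperparameter search for fine-tuning a pre-trained transformer.
# Assumes transformers, datasets, and optuna are installed; values are illustrative.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "bert-base-uncased"               # assumed base model
tokenizer = AutoTokenizer.from_pretrained(model_name)

raw = load_dataset("glue", "sst2")             # small sentiment dataset for the demo
def tok(batch):
    return tokenizer(batch["sentence"], truncation=True, padding="max_length", max_length=128)
train_ds = raw["train"].shuffle(seed=0).select(range(2000)).map(tok, batched=True)
eval_ds = raw["validation"].map(tok, batched=True)

def model_init():
    # A fresh copy of the pre-trained model for every trial.
    return AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

def hp_space(trial):
    # Search space over two common fine-tuning knobs.
    return {
        "learning_rate": trial.suggest_float("learning_rate", 1e-5, 5e-5, log=True),
        "num_train_epochs": trial.suggest_int("num_train_epochs", 2, 4),
    }

trainer = Trainer(
    model_init=model_init,
    args=TrainingArguments(output_dir="hpo-out", evaluation_strategy="epoch"),
    train_dataset=train_ds,
    eval_dataset=eval_ds,
)
best = trainer.hyperparameter_search(hp_space=hp_space, backend="optuna",
                                     n_trials=10, direction="minimize")
print(best)   # best trial's objective and hyperparameters
```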
-
Diagnose model performance before deployment for Amazon Fraud Detector, by Julia Xu (AWS Machine Learning Blog) on June 29, 2022
With the growth in adoption of online applications and the rising number of internet users, digital fraud is on the rise year over year. Amazon Fraud Detector provides a fully managed service to help you better identify potentially fraudulent online activities using advanced machine learning (ML) techniques, and more than 20 years of fraud detection
-
Reducing gender-based harms in AI with Sunipa Dev, by (AI) on June 29, 2022
Natural language processing (NLP) is a form of artificial intelligence that teaches computer programs how to take in, interpret, and produce language from large data sets. For example, grammar checkers use NLP to come up with grammar suggestions that help people write grammatically correct phrases. But as Google’s AI Principles note, it’s sometimes necessary to have human intervention to identify risks of unfair bias. Sunipa Dev is a research scientist at Google who […]
-
Introducing the Microsoft Climate Research Initiative, by Brenda Potts (Microsoft Research) on June 29, 2022
Addressing and mitigating the effects of climate change requires a collective effort, bringing our strengths to bear across industry, government, academia, and civil society.
-
NVIDIA Studio Driver Elevates Creative Workflows in Blender 3.2, BorisFX Sapphire and Topaz Denoise AI, by Stanley Tack (NVIDIA Blog) on June 29, 2022
The June NVIDIA Studio Driver is available for download today, optimizing the latest creative app updates, all with the stability and reliability that users count on. Creators with NVIDIA RTX GPUs will benefit from faster performance and new features within Blender version 3.2, BorisFX Sapphire release 2022.5 and Topaz Denoise AI 3.7.0.
-
Q&A with Stratis Ioannidis, associate professor at Northeastern University and Meta academic collaborator, by Meta Research on June 29, 2022
-
Create audio for content in multiple languages with the same TTS voice persona in Amazon Polly, by Patryk Wainaina (AWS Machine Learning Blog) on June 28, 2022
Amazon Polly is a leading cloud-based service that converts text into lifelike speech. Following the adoption of Neural Text-to-Speech (NTTS), we have continuously expanded our portfolio of available voices in order to provide a wide selection of distinct speakers in supported languages. Today, we are pleased to announce four new additions: Pedro speaking US Spanish,
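As a minimal sketch of what calling one of these neural voices looks like (assuming default AWS credentials; the text and output file are illustrative, not from the post):

```python
# Minimal sketch: synthesizing speech with a neural Amazon Polly voice via boto3.
import boto3

polly = boto3.client("polly", region_name="us-east-1")
resp = polly.synthesize_speech(
    Text="Hola, bienvenidos a Amazon Polly.",   # illustrative text
    VoiceId="Pedro",                             # one of the newly announced voices
    LanguageCode="es-US",                        # US Spanish, per the announcement
    Engine="neural",                             # the NTTS engine
    OutputFormat="mp3",
)
with open("speech.mp3", "wb") as f:
    f.write(resp["AudioStream"].read())          # write the returned audio stream to disk
```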
-
New built-in Amazon SageMaker algorithms for tabular data modeling: LightGBM, CatBoost, AutoGluon-Tabular, and TabTransformer, by Xin Huang (AWS Machine Learning Blog) on June 28, 2022
Amazon SageMaker provides a suite of built-in algorithms, pre-trained models, and pre-built solution templates to help data scientists and machine learning (ML) practitioners get started on training and deploying ML models quickly. You can use these algorithms and models for both supervised and unsupervised learning. They can process various types of input data, including tabular,
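For context on what one of the named algorithms does, here is a plain LightGBM example on a toy tabular dataset; this is the open-source library used directly, not the SageMaker built-in invocation, which wraps the same algorithm behind a managed training job.

```python
# Illustrative only: LightGBM on a small tabular classification dataset.
import lightgbm as lgb
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05)
model.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))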
-
DALL·E 2 Pre-Training Mitigations, by Alex Nichol (OpenAI) on June 28, 2022
In order to share the magic of DALL·E 2 with a broad audience, we needed to reduce the risks associated with powerful image generation models. To this end, we put various guardrails in place to prevent generated images from violating our content policy. This post focuses on pre-training
-
NVIDIA Teams With HPE to Take AI From Edge to Cloud, by Nicola Sessions (NVIDIA Blog) on June 28, 2022
Enterprises now have a new option for quickly getting started with NVIDIA AI software: the HPE GreenLake edge-to-cloud platform. The NVIDIA AI Enterprise software suite is an end-to-end, cloud-native suite of AI and data analytics software. It’s optimized to enable any organization to use AI, and doesn’t require deep AI expertise.
-
Semantic segmentation data labeling and model training using Amazon SageMaker, by Kara Yang (AWS Machine Learning Blog) on June 28, 2022
In computer vision, semantic segmentation is the task of classifying every pixel in an image with a class from a known set of labels such that pixels with the same label share certain characteristics. It generates a segmentation mask of the input images. For example, the following images show a segmentation mask of the cat
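To make the "segmentation mask" concrete, here is a tiny, self-contained illustration of how a per-pixel mask is derived from per-class scores; the shapes and class count are made up for the example and are not from the post.

```python
# Toy illustration: a segmentation mask assigns every pixel the class with the highest score.
import numpy as np

num_classes, height, width = 3, 4, 4           # e.g. background, cat, dog
logits = np.random.randn(num_classes, height, width)   # stand-in for model output

mask = logits.argmax(axis=0)                   # (height, width) array of class IDs
print(mask)                                    # every pixel now carries exactly one label
```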
-
Deep demand forecasting with Amazon SageMaker, by Alak Eswaradass (AWS Machine Learning Blog) on June 28, 2022
Every business needs the ability to predict the future accurately in order to make better decisions and give the company a competitive advantage. With historical data, businesses can understand trends, make predictions of what might happen and when, and incorporate that information into their future plans, from product demand to inventory planning and staffing. If
-
Detect to Protect: Taiwan Hospital Deploys Real-Time AI Risk Prediction for Kidney Patients, by Mona Flores (NVIDIA Blog) on June 28, 2022
Taiwan has nearly 85,000 kidney dialysis patients — the highest prevalence in the world based on population density. Taipei Veterans General Hospital (TVGH) is working to improve outcomes for these patients with an AI model that predicts heart failure risk in real time during dialysis procedures. Cardiovascular disease is the leading cause of death for
-
Inspect your data labels with a visual, no code tool to create high-quality training datasets with Amazon SageMaker Ground Truth Plus, by Manish Goel (AWS Machine Learning Blog) on June 27, 2022
Launched at AWS re:Invent 2021, Amazon SageMaker Ground Truth Plus helps you create high-quality training datasets by removing the undifferentiated heavy lifting associated with building data labeling applications and managing the labeling workforce. All you do is share data along with labeling requirements, and Ground Truth Plus sets up and manages your data labeling workflow
-
Choose specific timeseries to forecast with Amazon Forecast, by Meetish Dave (AWS Machine Learning Blog) on June 24, 2022
Today, we’re excited to announce that Amazon Forecast offers the ability to generate forecasts on a selected subset of items. This helps you to leverage the full value of your data and apply it selectively to your choice of items, reducing the time and effort to get forecasted results. Generating a forecast on ‘all’ items of the
-
Improve ML developer productivity with Weights & Biases: A computer vision example on Amazon SageMaker, by Thomas Capelle (AWS Machine Learning Blog) on June 24, 2022
This post is co-written with Thomas Capelle at Weights & Biases. As more organizations use deep learning techniques such as computer vision and natural language processing, the machine learning (ML) developer persona needs scalable tooling around experiment tracking, lineage, and collaboration. Experiment tracking includes metadata such as operating system, infrastructure used, library, and input and
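As a minimal sketch of the experiment-tracking piece (not the post's full SageMaker integration), the snippet below logs a run's configuration and per-epoch metrics to Weights & Biases; the project name and metric values are placeholders.

```python
# Minimal experiment-tracking sketch with Weights & Biases.
import wandb

wandb.init(project="cv-example", config={"lr": 1e-3, "epochs": 3})   # hypothetical project

for epoch in range(wandb.config.epochs):
    train_loss = 1.0 / (epoch + 1)            # stand-in for a real training loop's loss
    wandb.log({"epoch": epoch, "train_loss": train_loss})

wandb.finish()                                # close the run so it appears complete in the UI
```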
-
How Cepsa used Amazon SageMaker and AWS Step Functions to industrialize their ML projects and operate their models at scale, by Guillermo Ribeiro Jimenez (AWS Machine Learning Blog) on June 24, 2022
This blog post is co-authored by Guillermo Ribeiro, Sr. Data Scientist at Cepsa. Machine learning (ML) has rapidly evolved from being a fashionable trend emerging from academic environments and innovation departments to becoming a key means to deliver value across businesses in every industry. This transition from experiments in laboratories to solving real-world problems in
-
Analyze and tag assets stored in Veeva Vault PromoMats using Amazon AppFlow and Amazon AI Services, by Anamaria Todor (AWS Machine Learning Blog) on June 24, 2022
In a previous post, we talked about analyzing and tagging assets stored in Veeva Vault PromoMats using Amazon AI services and the Veeva Vault Platform’s APIs. In this post, we explore how to use Amazon AppFlow, a fully managed integration service that enables you to securely transfer data from software as a service (SaaS) applications
-
MLOps foundation roadmap for enterprises with Amazon SageMaker, by Sokratis Kartakis (AWS Machine Learning Blog) on June 24, 2022
As enterprise businesses embrace machine learning (ML) across their organizations, manual workflows for building, training, and deploying ML models tend to become bottlenecks to innovation. To overcome this, enterprises need to shape a clear operating model defining how multiple personas, such as data scientists, data engineers, ML engineers, IT, and business stakeholders, should collaborate and
-
Introducing Amazon CodeWhisperer, the ML-powered coding companion, by Ankur Desai (AWS Machine Learning Blog) on June 24, 2022
We are excited to announce Amazon CodeWhisperer, a machine learning (ML)-powered service that helps improve developer productivity by providing code recommendations based on developers’ natural comments and prior code. With CodeWhisperer, developers can simply write a comment that outlines a specific task in plain English, such as “upload a file to S3.” Based on this,
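To illustrate the kind of completion such a comment maps to, here is a hand-written boto3 example of the "upload a file to S3" task; this is not actual CodeWhisperer output, and the bucket and key names are placeholders.

```python
# Illustrative only: the sort of code a natural-language comment like the one below
# is meant to produce. Bucket and object key are hypothetical.
import boto3

# upload a file to S3
s3 = boto3.client("s3")
s3.upload_file("report.csv", "my-example-bucket", "reports/report.csv")
```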
-
Finding NeMo: Sensory Taps NVIDIA AI for Voice and Vision Applications, by Scott Martin (NVIDIA Blog) on June 24, 2022
You may not know of Todd Mozer, but it’s likely you have experienced his company: It has enabled voice and vision AI for billions of consumer electronics devices worldwide. Sensory, started in 1994 from Silicon Valley, is a pioneer of compact models used in mobile devices from the industry’s giants. Today Sensory brings interactivity to
-
Manage AutoML workflows with AWS Step Functions and AutoGluon on Amazon SageMaker, by Federico Piccinini (AWS Machine Learning Blog) on June 24, 2022
Running machine learning (ML) experiments in the cloud can span across many services and components. The ability to structure, automate, and track ML experiments is essential to enable rapid development of ML models. With the latest advancements in the field of automated machine learning (AutoML), namely the area of ML dedicated to the automation of
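As a quick look at the AutoML library the workflow builds on (the Step Functions orchestration itself is not shown), here is a minimal AutoGluon-Tabular sketch on a toy dataset; the label column and time limit are illustrative assumptions.

```python
# Minimal AutoGluon-Tabular sketch: fit an AutoML model on a small tabular dataset.
import pandas as pd
from autogluon.tabular import TabularPredictor
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
df = X.copy()
df["target"] = y                                  # illustrative label column name
train_df, test_df = df.iloc[:450], df.iloc[450:]

predictor = TabularPredictor(label="target").fit(train_df, time_limit=120)
print(predictor.evaluate(test_df))                # accuracy and related metrics
```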
-
UN Satellite Centre Works With NVIDIA to Boost Sustainable Development Goals, by Angie Lee (NVIDIA Blog) on June 24, 2022
To foster climate action for a healthy global environment, NVIDIA is working with the United Nations Satellite Centre (UNOSAT) to apply the powers of deep learning and AI. The effort supports the UN’s 2030 Agenda for Sustainable Development, which has at its core 17 interrelated Sustainable Development Goals. These SDGs — which include “climate action”
-
Meta PhD Fellowship Spotlight: Empowering Older Adults and Combating Bias in Tech Development, by Meta Research on June 24, 2022
As a continuation of our Fellowship spotlight series, we’re highlighting a 2021 Meta PhD Fellow in Human-Centered Computing, Reza Ghaiumy Anaraky.
-
Family Style: Li Auto L9 Brings Top-Line Luxury and Intelligence to Full-Size SUV With NVIDIA DRIVE Orin, by Katie Burke (NVIDIA Blog) on June 23, 2022
Finally, there’s a family car any kid would want to be seen in. Beijing-based startup Li Auto this week rolled out its second electric vehicle, the L9. It’s a full-size SUV decked out with the latest intelligent driving technology. With AI features and an extended battery range of more than 800 miles, the L9 promises
-
Import data from cross-account Amazon Redshift in Amazon SageMaker Data Wrangler for exploratory data analysis and data preparation, by Meenakshisundaram Thandavarayan (AWS Machine Learning Blog) on June 23, 2022
Organizations moving towards a data-driven culture embrace the use of data and machine learning (ML) in decision-making. To make ML-based decisions from data, you need your data available, accessible, clean, and in the right format to train ML models. Organizations with a multi-account architecture want to avoid situations where they must extract data from one
-
Predict types of machine failures with no-code machine learning using Amazon SageMaker Canvas, by Rajakumar Sampathkumar (AWS Machine Learning Blog) on June 23, 2022
Predicting common machine failure types is critical in manufacturing industries. Given a set of characteristics of a product that is tied to a given type of failure, you can develop a model that can predict the failure type when you feed those attributes to a machine learning (ML) model. ML can help with insights, but
-
Learning to Play Minecraft with Video PreTraining (VPT), by Bowen Baker (OpenAI) on June 23, 2022
We trained a neural network to play Minecraft by Video PreTraining (VPT) on a massive unlabeled video dataset of human Minecraft play, while using only a small amount of labeled contractor data. With fine-tuning, our model can learn to craft diamond tools, a task that usually takes proficient humans over
-
GODEL: Combining goal-oriented dialog with real-world conversations, by Alyssa Hughes (Microsoft Research) on June 23, 2022
They make restaurant recommendations, help us pay bills, and remind us of appointments. Many people have come to rely on virtual assistants and chatbots to perform a wide range of routine tasks. But what if a single dialog agent, the technology behind these language-based apps, could perform all these tasks and then take the conversation
-
Making an Impact: GFN Thursday Transforms Macs Into GeForce Gaming PCs, by GeForce NOW Community (NVIDIA Blog) on June 23, 2022
Thanks to the GeForce cloud, even Mac users can be PC gamers. This GFN Thursday, fire up your Macbook and get your game on. This week brings eight more games to the GeForce NOW library. Plus, members can play Genshin Impact and claim a reward to start them out on their journeys streaming on GeForce
-
Leading a movement to strengthen machine learning in Africa, by DeepMind Blog on June 23, 2022
-
How AI creates photorealistic images from text, by (AI) on June 22, 2022
Have you ever seen a puppy in a nest emerging from a cracked egg? What about a photo that’s overlooking a steampunk city with airships? Or a picture of two robots having a romantic evening at the movies? These might sound far-fetched, but a novel type of machine learning technology called text-to-image generation makes them possible. These models can generate high-quality, photorealistic images from a simple text prompt. Within Google Research, our scientists and engineers […]
-
Meet the Omnivore: Director of Photography Revs Up NVIDIA Omniverse to Create Sleek Car Demo, by Angie Lee (NVIDIA Blog) on June 22, 2022
A camera begins in the sky, flies through some trees and smoothly exits the forest, all while precisely tracking a car driving down a dirt path. This would be all but impossible in the real world, according to film and photography director Brett Danton.
-
Artem Cherkasov and Olexandr Isayev on Democratizing Drug Discovery With NVIDIA GPUs, by Clarissa Eyu (NVIDIA Blog) on June 22, 2022
It may seem intuitive that AI and deep learning can speed up workflows — including novel drug discovery, a typically years-long and several-billion-dollar endeavor. But professors Artem Cherkasov and Olexandr Isayev were surprised to find that no recent academic papers provided a comprehensive, global research review of how deep learning and GPU-accelerated computing impact drug
-
Swin Transformer supports 3-billion-parameter vision models that can train with higher-resolution images for greater task applicability, by Alyssa Hughes (Microsoft Research) on June 21, 2022
Early last year, our research team from the Visual Computing Group introduced Swin Transformer, a Transformer-based general-purpose computer vision architecture that for the first time beat convolutional neural networks on the important vision benchmark of COCO object detection and did so by a large margin. Convolutional neural networks (CNNs) have long been the architecture of
-
AI in the Big Easy: NVIDIA Research Lets Content Creators Improvise With 3D Objects, by Isha Salian (NVIDIA Blog) on June 21, 2022
Jazz is all about improvisation — and NVIDIA is paying tribute to the genre with AI research that could one day enable graphics creators to improvise with 3D objects created in the time it takes to hold a jam session. The method, NVIDIA 3D MoMa, could empower architects, designers, concept artists and game developers to
-
NVIDIA Joins Forum to Help Lay the Foundation of the Metaverse, by Rev Lebaredian (NVIDIA Blog) on June 21, 2022
The metaverse is the next big step in the evolution of the internet — the 3D web — which presents a major opportunity for every industry from entertainment to automotive to manufacturing, robotics and beyond. That’s why NVIDIA is joining our partners in the Metaverse Standards Forum, an open venue for all interested parties to
-
3D Artist Jae Solina Goes Cyberpunk This Week ‘In the NVIDIA Studio’, by Stanley Tack (NVIDIA Blog) on June 21, 2022
3D artist Jae Solina, who goes by the stage name JSFILMZ, steps In the NVIDIA Studio this week to share his unique 3D creative workflow in the making of Cyberpunk Short Film — a story shrouded in mystery with a tense exchange between two secretive contacts.
-
NVIDIA Accelerates Open Data Center Innovation, by Ami Badani (NVIDIA Blog) on June 21, 2022
NVIDIA today became a founding member of the Linux Foundation’s Open Programmable Infrastructure (OPI) project, while making its NVIDIA DOCA networking software APIs widely available to foster innovation in the data center. Businesses are embracing open data centers, which require applications and services that are easily integrated with other solutions for simplified, lower-cost and sustainable
-
The King’s Swedish: AI Rewrites the Book in Scandinavia, by Fredric Wall (NVIDIA Blog) on June 20, 2022
If the King of Sweden wants help drafting his annual Christmas speech this year, he could ask the same AI model that’s available to his 10 million subjects. As a test, researchers prompted the model, called GPT-SW3, to draft one of the royal messages, and it did a pretty good job, according to Magnus Sahlgren,
-
Announcing the winners of the 2022 Silent Data Corruptions at Scale request for proposals, by Meta Research on June 17, 2022
-
Meta PhD Fellow Spotlight: Helping the Human Condition with Virtual Reality, by Meta Research on June 16, 2022
-
WhatsApp announces first research award opportunity on privacy-aware program analysis, by Meta Research on June 15, 2022
-
Bridging DeepMind research with Alphabet products, by DeepMind Blog on June 15, 2022
Today we caught up with Gemma Jennings, a product manager on the Applied team, who led a session on vision language models at the AI Summit, one of the world’s largest AI events for business.
-
AI-Written Critiques Help Humans Notice Flaws, by Jan Leike (OpenAI) on June 13, 2022
Showing model-generated critical comments to humans helps them find flaws in summaries.
-
ICLR 2022 highlights from Microsoft Research Asia: Expanding the horizon of machine learning techniques and applications, by Alyssa Hughes (Microsoft Research) on June 13, 2022
ICLR (International Conference on Learning Representations) is recognized as one of the top conferences in the field of deep learning. Many influential papers on artificial intelligence, statistics, and data science—as well as important application fields such as machine vision, speech recognition, and text understanding—have been published and presented at this conference.
-
Research highlights from the Core Data Science team at Meta, by Meta Research on June 10, 2022
Core Data Science (CDS) is a central science organization that drives impact for Meta and the world through use-inspired advancements to our fundamental understanding of the intern...
-
Techniques for Training Large Neural Networks, by Lilian Weng (OpenAI) on June 9, 2022
Large neural networks are at the core of many recent advances in AI, but training them is a difficult engineering and research challenge which requires orchestrating a cluster of GPUs to perform a single synchronized calculation. As cluster and model sizes have grown, machine learning practitioners have developed an increasing
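To ground one of the core ideas, here is a toy, single-process illustration of data parallelism, one technique for splitting training across a cluster: each simulated "worker" computes gradients on its shard of the batch, the gradients are averaged (the all-reduce step), and one shared update is applied. This is a conceptual sketch, not the post's implementation.

```python
# Toy illustration of data parallelism: per-worker gradients on data shards,
# averaged before a single shared parameter update.
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(3)                                   # tiny linear model
X, y = rng.normal(size=(8, 3)), rng.normal(size=8)

def shard_gradient(Xs, ys, w):
    # Gradient of mean squared error on one worker's shard of the batch.
    err = Xs @ w - ys
    return 2 * Xs.T @ err / len(ys)

num_workers, lr = 4, 0.1
for step in range(100):
    grads = [shard_gradient(Xs, ys, w)
             for Xs, ys in zip(np.array_split(X, num_workers),
                               np.array_split(y, num_workers))]
    w -= lr * np.mean(grads, axis=0)              # averaged gradient, like an all-reduce

print("learned weights:", w)
```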
-
Building a more helpful browser with machine learning, by (AI) on June 9, 2022
At Google we use technologies like machine learning (ML) to build more useful products — from filtering out email spam, to keeping maps up to date, to offering more relevant search results. Chrome is no exception: We use ML to make web images more accessible to people who are blind or have low vision, and we also generate real-time captions for online videos, in service of people in noisy environments, and those who are hard of hearing. This work in Chrome continues, so we […]
-
What research at Meta FinTech looks like: Q&A with Research Scientist Shaz Qadeer, by Meta Research on June 8, 2022
PhDs come to work at Meta Research from a wide variety of disciplines related to computer science and engineering. Research Scientist Shaz Qadeer joined Meta to work on blockchain [...]
-
Best Practices for Deploying Language Models, by OpenAI (OpenAI) on June 2, 2022
Cohere, OpenAI, and AI21 Labs have developed a preliminary set of best practices applicable to any organization developing or deploying large language models. Computers that can read and write are here, and they have the potential to fundamentally impact daily life. The future of human–machine interaction is full
-
Advocating for the LGBTQ+ community in AI research, by DeepMind Blog on June 1, 2022
Research scientist, Kevin McKee, tells how his early love of science fiction and social psychology inspired his career, and how he’s helping advance research in ‘queer fairness’, support human-AI collaboration, and study the effects of AI on the LGBTQ+ community.
-
DoWhy evolves to independent PyWhy model to help causal inference grow, by Alyssa Hughes (Microsoft Research) on May 31, 2022
Identifying causal effects is an integral part of scientific inquiry. It helps us understand everything from educational outcomes to the effects of social policies to risk factors for diseases. Questions of cause-and-effect are also critical for the design and data-driven evaluation of many technological systems we build today. To help data scientists better understand and
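For readers new to the library, here is a minimal DoWhy sketch on synthetic data (not an example from the post): define a causal model with a known confounder, identify the estimand, and estimate the effect with a backdoor adjustment.

```python
# Minimal DoWhy sketch on synthetic data with one confounder W.
import numpy as np
import pandas as pd
from dowhy import CausalModel

rng = np.random.default_rng(0)
n = 1000
confounder = rng.normal(size=n)
treatment = (confounder + rng.normal(size=n) > 0).astype(int)
outcome = 2.0 * treatment + confounder + rng.normal(size=n)   # true effect is 2.0
df = pd.DataFrame({"W": confounder, "T": treatment, "Y": outcome})

model = CausalModel(data=df, treatment="T", outcome="Y", common_causes=["W"])
estimand = model.identify_effect()
estimate = model.estimate_effect(estimand, method_name="backdoor.linear_regression")
print(estimate.value)   # should land close to the true effect of 2.0
```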
-
Kyrgyzstan to King’s Cross: the star baker cooking up code, by DeepMind Blog on May 26, 2022
My day can vary, it really depends on which phase of the project I'm on. Let’s say we want to add a feature to our product – my tasks could range from designing solutions and working with the team to find the best one, to deploying new features into production and doing maintenance. Along the way, I’ll communicate changes to our stakeholders, write docs, code and test solutions, build analytics dashboards, clean-up old code, and fix bugs.
-
Powering Next Generation Applications with OpenAI Codex, by OpenAI (OpenAI) on May 24, 2022
Codex is now powering 70 different applications across a variety of use cases through the OpenAI API.
-
Building a culture of pioneering responsibly, by DeepMind Blog on May 24, 2022
When I joined DeepMind as COO, I did so in large part because I could tell that the founders and team had the same focus on positive social impact. In fact, at DeepMind, we now champion a term that perfectly captures my own values and hopes for integrating technology into people’s daily lives: pioneering responsibly. I believe pioneering responsibly should be a priority for anyone working in tech. But I also recognise that it’s especially important when it comes to […]
-
(De)ToxiGen: Leveraging large language models to build more robust hate speech detection tools, by Alyssa Hughes (Microsoft Research) on May 23, 2022
It’s a well-known challenge that large language models (LLMs)—growing in popularity thanks to their adaptability across a variety of applications—carry risks. Because they’re trained on large amounts of data from across the internet, they’re capable of generating inappropriate and harmful language based on similar language encountered during training. Content moderation tools can be deployed to
-
Partnering people with large language models to find and fix bugs in NLP systems, by Alyssa Hughes (Microsoft Research) on May 23, 2022
Advances in platform models—large-scale models that can serve as foundations across applications—have significantly improved the ability of computers to process natural language. But natural language processing (NLP) models are still far from perfect, sometimes failing in embarrassing ways, like translating “Eu não recomendo este prato” (I don’t recommend this dish) in Portuguese to “I highly recommend this dish” in English (a real example from a top […]
-
Open-sourcing MuJoCo, by DeepMind Blog on May 23, 2022
In October 2021, we announced that we acquired the MuJoCo physics simulator, and made it freely available for everyone to support research everywhere. We also committed to developing and maintaining MuJoCo as a free, open-source, community-driven project with best-in-class capabilities. Today, we’re thrilled to report that open sourcing is complete and the entire codebase is on GitHub! Here, we explain why MuJoCo is a great platform for open-source collaboration and share […]
-
How we build with and for people with disabilities, by (AI) on May 19, 2022
Editor’s note: Today is Global Accessibility Awareness Day. We’re also sharing how we’re making education more accessible and launching a new Android accessibility feature. Over the past nine years, my job has focused on building accessible products and supporting Googlers with disabilities. Along the way, I’ve been constantly reminded of how vast and diverse the disability community is, and how important it is to continue working alongside this community to build […]
-
From LEGO competitions to DeepMind's robotics lab, by DeepMind Blog on May 19, 2022
If you want to be at DeepMind, go for it. Apply, interview, and just try. You might not get it the first time but that doesn’t mean you can’t try again. I never thought DeepMind would accept me, and when they did, I thought it was a mistake. Everyone doubts themselves – I’ve never felt like the smartest person in the room. I’ve often felt the opposite. But I’ve learned that, despite those feelings, I do belong and I do deserve to work at a place like this. And […]
-
DALL·E 2 Research Preview Update, by Joanne Jang (OpenAI) on May 18, 2022
Early users have created over 3 million images to date and helped us improve our safety processes. We're excited to begin adding up to 1,000 new users from our waitlist each week.
-
FLUTE: A scalable federated learning simulation platform, by Alyssa Hughes (Microsoft Research) on May 16, 2022
Federated learning has become a major area of machine learning (ML) research in recent years due to its versatility in training complex models over massive amounts of data without the need to share that data with a centralized entity. However, despite this flexibility and the amount of research already conducted, it’s difficult to implement due
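To illustrate the basic idea behind federated learning (this is a generic toy sketch of federated averaging, not FLUTE's actual API): each client trains locally on data that never leaves the client, and only model weights are sent back and averaged by the server.

```python
# Toy federated averaging: local updates on private data, server averages weights only.
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, steps=10):
    # A few steps of local gradient descent on one client's private data.
    for _ in range(steps):
        w = w - lr * (2 * X.T @ (X @ w - y) / len(y))
    return w

# Three clients, each holding its own data.
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]
global_w = np.zeros(3)

for round_ in range(20):
    local_weights = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_weights, axis=0)     # the server-side aggregation step

print("global model after federated averaging:", global_w)
```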
-
Q&A with UIUC professor Lav Varshney, the AI expert behind sustainable concrete collaboration with Meta, by Meta Research on May 13, 2022
For May, we nominated Lav Varshney, an associate professor in the Department of Electrical and Computer Engineering and the Coordinated Science Laboratory at University of [...]
-
Improving skin tone representation across Google, by (AI) on May 11, 2022
Seeing yourself reflected in the world around you — in real life, media or online — is so important. And we know that challenges with image-based technologies and representation on the web have historically left people of color feeling overlooked and misrepresented. Last year, we announced Real Tone for Pixel, which is just one example of our efforts to improve representation of diverse skin tones across Google products. Today, we're introducing a next step in our […]
-
Google Translate learns 24 new languages, by (AI) on May 11, 2022
For years, Google Translate has helped break down language barriers and connect communities all over the world. And we want to make this possible for even more people — especially those whose languages aren’t represented in most technology. So today we’ve added 24 languages to Translate, now supporting a total of 133 used around the globe. Over 300 million people speak these newly added languages — like Mizo, used by around 800,000 people in the far northeast of […]
-
Google I/O 2022: Advancing knowledge and computing, by (AI) on May 11, 2022
[TL;DR] Nearly 24 years ago, Google started with two graduate students, one product, and a big mission: to organize the world’s information and make it universally accessible and useful. In the decades since, we’ve been developing our technology to deliver on that mission. The progress we've made is because of our years of investment in advanced technologies, from AI to the technical infrastructure that powers it all. And once a year — on my favorite day of the year 🙂 […]
-
Understanding the world through language, by (AI) on May 11, 2022
Language is at the heart of how people communicate with each other. It’s also proving to be powerful in advancing AI and building helpful experiences for people worldwide. From the beginning, we set out to connect words in your search to words on a page so we could make the web’s information more accessible and useful. Over 20 years later, as the web changes, and the ways people consume information expand from text to images to videos and more — the one constant is that […]
-
Immersive view coming soon to Maps — plus more updates, by (AI) on May 11, 2022
Google Maps helps over one billion people navigate and explore. And over the past few years, our investments in AI have supercharged the ability to bring you the most helpful information about the real world, including when a business is open and how crowded your bus is. Today at Google I/O, we announced new ways the latest advancements in AI are transforming Google Maps — helping you explore with an all-new immersive view of the world, find the most fuel-efficient route, […]
-
A closer look at the research to help AI see more skin tones, by (AI) on May 11, 2022
Today at I/O we released the Monk Skin Tone (MST) Scale in partnership with Harvard professor and sociologist Dr. Ellis Monk. The MST Scale, developed by Dr. Monk, is a 10-shade scale designed to be more inclusive of the spectrum of skin tones in our society. We’ll be incorporating the MST Scale into various Google products over the coming months, and we are openly releasing the scale so that anyone can use it for research and product development. The MST Scale is an […]
-
OpenAI Leadership Team Update, by Sam Altman (OpenAI) on May 5, 2022
We’re happy to announce several executive role changes that reflect our recent progress and will ensure continued momentum toward our next major milestones.
-
Azure Quantum innovation: Efficient error correction of topological qubits with Floquet codes, by Alyssa Hughes (Microsoft Research) on May 5, 2022
Technological innovation that enables scaling of quantum computing underpins the Microsoft Azure Quantum program. In March of this year, we announced our demonstration of the underlying physics required to create a topological qubit—qubits that are theorized to be inherently more stable than existing ones without sacrificing size or speed. However, our quest to deliver a
-
Tackling multiple tasks with a single visual language model, by DeepMind Blog on April 28, 2022
We introduce Flamingo, a single visual language model (VLM) that sets a new state of the art in few-shot learning on a wide range of open-ended multimodal tasks.
-
When a passion for bass and brass help build better tools, by DeepMind Blog on April 28, 2022
We caught up with Kevin Millikin, a software engineer on the DevTools team. He’s in Salt Lake City this week to present at PyCon US, the largest annual gathering for those using and developing the open-source Python programming language.
-
MoLeR: Creating a path to more efficient drug design, by Alyssa Hughes (Microsoft Research) on April 27, 2022
Drug discovery has come a long way from its roots in serendipity. It is now an increasingly rational process, in which one important phase, called lead optimization, is the stepwise search for promising drug candidate compounds in the lab. In this phase, expert medicinal chemists work to improve “hit” molecules—compounds that demonstrate some promising properties,
-
Reality Labs Chief Scientist gives talk on augmented reality at the 2021 IEEE International Electron Devices Meeting, by Meta Research on April 25, 2022
-
DeepMind’s latest research at ICLR 2022, by DeepMind Blog on April 25, 2022
Beyond supporting the event as sponsors and regular workshop organisers, our research teams are presenting 29 papers, including 10 collaborations this year. Here’s a brief glimpse into our upcoming oral, spotlight, and poster presentations.
-
New lessons learned in building COVID-19 vaccine acceptance, by Meta Research on April 22, 2022
A review of campaign partnerships between Data for Good at Meta, UNICEF, the Yale Institute for Global Health, and the Public Good Projects
-
Measuring Goodhart’s Law, by Jacob Hilton (OpenAI) on April 13, 2022
Goodhart’s law famously says: “When a measure becomes a target, it ceases to be a good measure.” Although originally from economics, it’s something we have to grapple with at OpenAI when figuring out how to optimize objectives that are difficult or costly to measure.
-
Investing in Eastern Europe’s AI future, by (AI) on April 11, 2022
It was an honor and a privilege to attend a special event in the Bulgarian capital, Sofia, today to launch INSAIT, the Institute for Computer Science, Artificial Intelligence and Technology. INSAIT is a new AI and computer science research institute that will provide truly world-class facilities. It’s fantastic to see the country where I was born leading the charge in bridging Eastern Europe to the world-stage in computer science research. The institute is modeled on the […]
-
How AI and imagery build a self-updating map, by (AI) on April 7, 2022
Building a map is complex, and keeping it up-to-date is even more challenging. Think about how often your city, town or neighborhood changes on a day-to-day basis. Businesses and shops open and close, stretches of highway are added, and roadways change. In today’s Maps 101 installment, we’ll dive into two ways Google Maps uses advancements in AI and imagery to help you see the latest information about the world around you every single day. Automatically updating business […]
-
DALL·E 2, by OpenAI (OpenAI) on April 6, 2022
DALL·E 2 is a new AI system that can create realistic images and art from a description in natural language.
-
“Lift as you lead”: Meet 2 women defining responsible AI, by (AI) on March 29, 2022
At Google, Marian Croak’s technical research team, The Center for Responsible AI and Human-Centered Technology, and Jen Gennai’s operations and governance team, Responsible Innovation, collaborate often on creating a fairer future for AI systems. The teams complement each other to support computer scientists, UX researchers and designers, product managers and subject matter experts in the social sciences, human rights and civil rights. Collectively, their teams include […]
-
Go with the flow state: What music and AI have in common, by (AI) on March 29, 2022
Carrie Cai, Ben Zevenbergen and Johnny Soraker all work on developing artificial intelligence (AI) responsibly at Google, in the larger research community and across the technology industry. Carrie is a research scientist focusing on human-AI interaction, Ben is an ethicist and policy advisor and Johnny is an AI Principles ethicist. They all work within a global team of experts from a variety of fields, including the social sciences and humanities, focused on the ethical […]
-
The Check Up: our latest health AI developments, by (AI) on March 24, 2022
Over the years, teams across Google have focused on how technology — specifically artificial intelligence and hardware innovations — can improve access to high-quality, equitable healthcare across the globe. Accessing the right healthcare can be challenging depending on where people live and whether local caregivers have specialized equipment or training for tasks like disease screening. To help, Google Health has expanded its research and applications to focus on […]
-
Meet 3 women who test Google products for fairness, by (AI) on March 17, 2022
One of the most interesting parts of working at Google is learning what other people do here — it’s not uncommon to come across a job title you’ve never heard of. For example: ProFair Program Manager, or ProFair Analyst. These roles are part of our Responsible Innovation team, which focuses on making sure our tech supports Google’s AI Principles. One way the team does this is by conducting proactive algorithmic product fairness — or ProFair — testing. This means […]
-
The Google.org grantee using AI to detect bushfire risks, by (AI) on March 16, 2022
From predicting floods to improving waste management, organizations and researchers across Asia Pacific are using technology to respond to the impact of climate change. Supporting this important work is a priority for Google.org. At today’s Southeast Asia Development Symposium, we announced a $6 million Sustainability Seed Fund to help organizations dedicated to addressing some of the region’s most difficult sustainability challenges. We look forward to sharing more in […]
-
New GPT-3 Capabilities: Edit & Insert, by Mohammad Bavarian (OpenAI) on March 15, 2022
We’ve released new versions of GPT-3 and Codex which can edit or insert content into existing text, rather than just completing existing text. These new capabilities make it practical to use the OpenAI API to revise existing content, such as rewriting a paragraph of text or refactoring code.
-
Predicting the past with Ithaca, by DeepMind Blog on March 9, 2022
The birth of human writing marked the dawn of History and is crucial to our understanding of past civilisations and the world we live in today. For example, more than 2,500 years ago, the Greeks began writing on stone, pottery, and metal to document everything from leases and laws to calendars and oracles, giving a detailed insight into the Mediterranean region. Unfortunately, it’s an incomplete record. Many of the surviving inscriptions have been damaged over the […]
-
Lessons Learned on Language Model Safety and Misuse, by Miles Brundage (OpenAI) on March 3, 2022
The deployment of powerful AI systems has enriched our understanding of safety and misuse far more than would have been possible through research alone. Notably: API-based language model misuse often comes in different forms than we feared most. We have identified limitations in existing language model evaluations that we are
-
Economic Impacts Research at OpenAI, by Sam Manning (OpenAI) on March 3, 2022
Call for expressions of interest to study the economic impacts of Codex.
-
Machine learning can help read the language of life, by (AI) on March 2, 2022
DNA is the language of life: our DNA forms a living record of things that went well for our ancestors, and things that didn’t. DNA tells our body (and every other organism) which proteins to produce; these proteins are tiny machines that carry out enormous tasks, from fighting off infection to helping you ace an upcoming exam in school. But for about a third of all proteins that all organisms produce, we just don’t know what they do. It’s kind of like we’re in a […]
-
An intro to AI, made for students, by (AI) on February 22, 2022
Adorable, operatic blobs. A global, online guessing game. Scribbles that transform into works of art. These may not sound like they’re part of a curriculum, but learning the basics of how artificial intelligence (AI) works doesn’t have to be complicated, super-technical or boring. To celebrate Digital Learning Day, we’re releasing a new lesson from Applied Digital Skills, Google’s free, online, video-based curriculum (and part of the larger Grow with Google […]
-
Accelerating fusion science through learned plasma control, by DeepMind Blog on February 16, 2022
Successfully controlling the nuclear fusion plasma in a tokamak with deep reinforcement learning
-
MuZero’s first step from research into the real world, by DeepMind Blog on February 11, 2022
Collaborating with YouTube to optimise video compression in the open source VP9 codec.
-
Solving (Some) Formal Math Olympiad Problems, by Stanislas Polu (OpenAI) on February 2, 2022
We built a neural theorem prover for Lean that learned to solve a variety of challenging high-school olympiad problems, including problems from the AMC12 and AIME competitions, as well as two problems adapted from the IMO.[1] The prover uses a language model to find proofs of formal statements. Each
-
Competitive programming with AlphaCode, by DeepMind Blog on February 2, 2022
Solving novel problems and setting a new milestone in competitive programming.
-
Aligning Language Models to Follow Instructions, by Ryan Lowe (OpenAI) on January 27, 2022
We’ve trained language models that are much better at following user intentions than GPT-3 while also making them more truthful and less toxic, using techniques developed through our alignment research. These InstructGPT models, which are trained with humans in the loop, are now deployed as the default language
-
DeepMind: The Podcast returns for Season 2, by DeepMind Blog on January 25, 2022
We believe artificial intelligence (AI) is one of the most significant technologies of our age and we want to help people understand its potential and how it’s being created.
-
Simulating matter on the quantum scale with AI, by DeepMind Blog on December 9, 2021
Solving some of the major challenges of the 21st Century, such as producing clean electricity or developing high temperature superconductors, will require us to design new materials with specific properties. To do this on a computer requires the simulation of electrons, the subatomic particles that govern how atoms bond to form molecules and are also responsible for the flow of electricity in solids.
-
Language modelling at scale: Gopher, ethical considerations, and retrieval, by DeepMind Blog on December 8, 2021
Language, and its role in demonstrating and facilitating comprehension - or intelligence - is a fundamental part of being human. It gives people the ability to communicate thoughts and concepts, express ideas, create memories, and build mutual understanding. These are foundational parts of social intelligence. It’s why our teams at DeepMind study aspects of language processing and communication, both in artificial agents and in humans.
-
Exploring the beauty of pure mathematics in novel ways, by DeepMind Blog on December 1, 2021
More than a century ago, Srinivasa Ramanujan shocked the mathematical world with his extraordinary ability to see remarkable patterns in numbers that no one else could see. The self-taught mathematician from India described his insights as deeply intuitive and spiritual, and patterns often came to him in vivid dreams.
-
Real-world challenges for AGI, by DeepMind Blog on November 2, 2021
When people picture a world with artificial general intelligence (AGI), robots are more likely to come to mind than enabling solutions to society’s most intractable problems. But I believe the latter is much closer to the truth. AI is already enabling huge leaps in tackling fundamental challenges: from solving protein folding to predicting accurate weather patterns, scientists are increasingly using AI to deduce the rules and principles that underpin highly complex […]
-
Opening up a physics simulator for robotics, by DeepMind Blog on October 18, 2021
When you walk, your feet make contact with the ground. When you write, your fingers make contact with the pen. Physical contacts are what makes interaction with the world possible. Yet, for such a common occurrence, contact is a surprisingly complex phenomenon. Taking place at microscopic scales at the interface of two bodies, contacts can be soft or stiff, bouncy or spongy, slippery or sticky. It’s no wonder our fingertips have four different types of touch-sensors. This […]
-
Stacking our way to more general robots, by DeepMind Blog on October 11, 2021
Picking up a stick and balancing it atop a log or stacking a pebble on a stone may seem like simple — and quite similar — actions for a person. However, most robots struggle with handling more than one such task at a time. Manipulating a stick requires a different set of behaviours than stacking stones, never mind piling various dishes on top of one another or assembling furniture. Before we can teach robots how to perform these kinds of tasks, they first need to learn […]
-
Predicting gene expression with AI, by DeepMind Blog on October 4, 2021
When the Human Genome Project succeeded in mapping the DNA sequence of the human genome, the international research community were excited by the opportunity to better understand the genetic instructions that influence human health and development. DNA carries the genetic information that determines everything from eye colour to susceptibility to certain diseases and disorders. The roughly 20,000 sections of DNA in the human body known as genes contain instructions about the […]
-
Nowcasting the next hour of rain, by DeepMind Blog on September 29, 2021
Our lives are dependent on the weather. At any moment in the UK, according to one study, one third of the country has talked about the weather in the past hour, reflecting the importance of weather in daily life. Amongst weather phenomena, rain is especially important because of its influence on our everyday decisions. Should I take an umbrella? How should we route vehicles experiencing heavy rain? What safety measures do we take for outdoor events? Will there be a flood? […]
-
Building architectures that can handle the world’s data, by DeepMind Blog on August 3, 2021
Most architectures used by AI systems today are specialists. A 2D residual network may be a good choice for processing images, but at best it’s a loose fit for other kinds of data — such as the Lidar signals used in self-driving cars or the torques used in robotics. What’s more, standard architectures are often designed with only one task in mind, often leading engineers to bend over backwards to reshape, distort, or otherwise modify their inputs and outputs in hopes […]
-
Generally capable agents emerge from open-ended play, by DeepMind Blog on July 27, 2021
In recent years, artificial intelligence agents have succeeded in a range of complex game environments. For instance, AlphaZero beat world-champion programs in chess, shogi, and Go after starting out with knowing no more than the basic rules of how to play. Through reinforcement learning (RL), this single system learnt by playing round after round of games through a repetitive process of trial and error. But AlphaZero still trained separately on each game — unable to […]
-
Putting the power of AlphaFold into the world’s hands, by DeepMind Blog on July 22, 2021
When we announced AlphaFold 2 last December, it was hailed as a solution to the 50-year old protein folding problem. Last week, we published the scientific paper and source code explaining how we created this highly innovative system, and today we’re sharing high-quality predictions for the shape of every single protein in the human body, as well as for the proteins of 20 additional organisms that scientists rely on for their research.
-
Elastic Distributed Training with XGBoost on Ray, by Michael Mui (Machine Learning Archives - Uber Engineering Blog) on July 7, 2021
Since we productionized distributed XGBoost on Apache Spark™ at Uber in 2017, XGBoost has powered a wide spectrum of machine learning (ML) use cases at Uber, spanning from optimizing marketplace dynamic pricing policies for Freight, improving times of …
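As a hedged sketch of the general idea (using the open-source xgboost_ray package; Uber's in-house setup described in the post may differ), the snippet below trains XGBoost across multiple Ray actors on a toy dataset.

```python
# Hedged sketch: distributed XGBoost training with the open-source xgboost_ray package.
# Dataset, parameters, and actor counts are illustrative, not from the post.
from sklearn.datasets import load_breast_cancer
from xgboost_ray import RayDMatrix, RayParams, train

X, y = load_breast_cancer(return_X_y=True)
dtrain = RayDMatrix(X, y)                     # data wrapper that Ray actors can shard

bst = train(
    {"objective": "binary:logistic", "eval_metric": "logloss"},
    dtrain,
    num_boost_round=50,
    ray_params=RayParams(num_actors=2, cpus_per_actor=1),   # two distributed workers
)
bst.save_model("model.xgb")                   # returned object is a regular XGBoost booster
```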
-
An update on our racial justice efforts, by DeepMind Blog on June 4, 2021
In June 2020, after George Floyd was killed in Minneapolis (USA) and millions spoke out in solidarity at Black Lives Matter protests around the world, I – like many others – reflected on the situation and how our organisation could contribute. I then shared some thoughts around DeepMind's intention to help combat racism and advance racial equity.
-
Advancing sports analytics through AI research by DeepMind Blog on May 7, 2021
Creating testing environments to help progress AI research out of the lab and into the real world is immensely challenging. Given AI’s long association with games, it is perhaps no surprise that sports presents an exciting opportunity, offering researchers a testbed in which an AI-enabled system can assist humans in making complex, real-time decisions in a multiagent environment with dozens of dynamic, interacting individuals.
-
Game theory as an engine for large-scale data analysis by DeepMind Blog on May 6, 2021
Modern AI systems approach tasks like recognising objects in images and predicting the 3D structure of proteins as a diligent student would prepare for an exam. By training on many example problems, they minimise their mistakes over time until they achieve success. But this is a solitary endeavour and only one of the known forms of learning. Learning also takes place by interacting and playing with others. It’s rare that a single individual can solve extremely complex […]
-
MuZero: Mastering Go, chess, shogi and Atari without rules by DeepMind Blog on December 23, 2020
In 2016, we introduced AlphaGo, the first artificial intelligence (AI) program to defeat humans at the ancient game of Go. Two years later, its successor - AlphaZero - learned from scratch to master Go, chess and shogi. Now, in a paper in the journal Nature, we describe MuZero, a significant step forward in the pursuit of general-purpose algorithms. MuZero masters Go, chess, shogi and Atari without needing to be told the rules, thanks to its ability to plan winning […]
-
Using JAX to accelerate our research by DeepMind Blog on December 4, 2020
DeepMind engineers accelerate our research by building tools, scaling up algorithms, and creating challenging virtual and physical worlds for training and testing artificial intelligence (AI) systems. As part of this work, we constantly evaluate new machine learning libraries and frameworks.
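For readers unfamiliar with JAX, a minimal, generic example of its composable transformations is sketched below; it is illustrative only and not DeepMind's internal code.

```python
# Minimal generic JAX example: grad differentiates a pure function and jit compiles it
# with XLA, which accounts for much of JAX's research-speed appeal.
import jax
import jax.numpy as jnp

def loss(w, x, y):
    pred = x @ w
    return jnp.mean((pred - y) ** 2)

grad_fn = jax.jit(jax.grad(loss))   # compiled gradient of the loss w.r.t. w
w = jnp.zeros(3)
x = jnp.ones((8, 3))
y = jnp.ones(8)
for _ in range(100):
    w = w - 0.1 * grad_fn(w, x, y)  # plain gradient-descent step
```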
-
AlphaFold: a solution to a 50-year-old grand challenge in biology by DeepMind Blog on November 30, 2020
Proteins are essential to life, supporting practically all its functions. They are large complex molecules, made up of chains of amino acids, and what a protein does largely depends on its unique 3D structure. Figuring out what shapes proteins fold into is known as the “protein folding problem”, and has stood as a grand challenge in biology for the past 50 years. In a major scientific advance, the latest version of our AI system AlphaFold has been recognised as a […]
-
Breaking down global barriers to access by DeepMind Blog on November 5, 2020
This week, we welcomed our biggest and most geographically diverse cohort of DeepMind scholars yet. We’re excited to reflect on the journey so far, share more on the next chapter of the DeepMind scholarships – and welcome many more universities from around the world into the programme.
-
FermiNet: Quantum Physics and Chemistry from First Principles by DeepMind Blog on October 19, 2020
In an article recently published in Physical Review Research, we show how deep learning can help solve the fundamental equations of quantum mechanics for real-world systems. Not only is this an important fundamental scientific question, but it also could lead to practical uses in the future, allowing researchers to prototype new materials and chemical syntheses in silico before trying to make them in the lab. Today we are also releasing the code from this study so that the […]
-
Fast reinforcement learning through the composition of behaviours by DeepMind Blog on October 12, 2020
Imagine if you had to learn how to chop, peel and stir all over again every time you wanted to learn a new recipe. In many machine learning systems, agents often have to learn entirely from scratch when faced with new challenges. It’s clear, however, that people learn more efficiently than this: they can combine abilities previously learned. In the same way that a finite dictionary of words can be reassembled into sentences of near infinite meanings, people repurpose and […]
-
Traffic prediction with advanced Graph Neural Networks by DeepMind Blog on September 3, 2020
By partnering with Google, DeepMind is able to bring the benefits of AI to billions of people all over the world. From reuniting a speech-impaired user with his original voice, to helping users discover personalised apps, we can apply breakthrough research to immediate real-world problems at a Google scale. Today we’re delighted to share the results of our latest partnership, delivering a truly global impact for the more than one billion people that use Google Maps.
-
Fiber: Distributed Computing for AI Made Simple by Jiale Zhi (Machine Learning Archives - Uber Engineering Blog) on June 30, 2020
Project Homepage: GitHub. Over the past several years, the increasing processing power of computing machines has driven rapid advances in machine learning. More and more, algorithms exploit parallelism and rely on distributed training to process an enormous amount of … The post Fiber: Distributed Computing for AI Made Simple appeared first on Uber Engineering Blog.
-
Applying for technical roles by DeepMind Blog on June 23, 2020
It’s no secret that the gender gap still exists within STEM. Despite a slight increase in recent years, studies show that women only make up about a quarter of the overall STEM workforce in the UK. While the reasons vary, many women report feeling held back by a lack of representation, clear opportunities and information on what working in the sector actually involves.
-
Profiles in Coding: Diana Yanakiev, Uber ATG, Pittsburgh by Bea Schuster (Machine Learning Archives - Uber Engineering Blog) on June 16, 2020
Self-driving cars have long been considered the future of transportation, but they’re becoming more present every day. Uber ATG (Advanced Technologies Group) is at the forefront of this technology, helping bring safe, reliable self-driving vehicles to the streets. Of course, … The post Profiles in Coding: Diana Yanakiev, Uber ATG, Pittsburgh appeared first on Uber Engineering Blog.
-
Introducing Neuropod, Uber ATG’s Open Source Deep Learning Inference Engine by Vivek Panyam (Machine Learning Archives - Uber Engineering Blog) on June 8, 2020
At Uber Advanced Technologies Group (ATG), we leverage deep learning to provide safe and reliable self-driving technology. Using deep learning, we can build and train models to handle tasks such as processing sensor input, identifying objects, and predicting where … The post Introducing Neuropod, Uber ATG’s Open Source Deep Learning Inference Engine appeared first on Uber Engineering Blog.
-
Inside Uber ATG’s Data Mining Operation: Identifying Real Road Scenarios at Scale for Machine Learning by Steffon Davis (Machine Learning Archives - Uber Engineering Blog) on June 2, 2020
How did the pedestrian cross the road? Contrary to popular belief, sometimes the answer isn’t as simple as “to get to the other side.” To bring safe, reliable self-driving vehicles (SDVs) to the streets at Uber Advanced Technologies Group (ATG)… The post Inside Uber ATG’s Data Mining Operation: Identifying Real Road Scenarios at Scale for Machine Learning appeared first on Uber Engineering Blog.
-
Meta-Graph: Few-Shot Link Prediction Using Meta-Learning by Ankit Jain (Machine Learning Archives - Uber Engineering Blog) on May 29, 2020
This article is based on the paper “Meta-Graph: Few Shot Link Prediction via Meta Learning” by Joey Bose, Ankit Jain, Piero Molino, and William L. Hamilton Many real-world data sets are structured as graphs, and as such, machine … The post Meta-Graph: Few-Shot Link Prediction Using Meta-Learning appeared first on Uber Engineering Blog.
-
Using AI to predict retinal disease progression by DeepMind Blog on May 18, 2020
Vision loss among the elderly is a major healthcare issue: about one in three people have some vision-reducing disease by the age of 65. Age-related macular degeneration (AMD) is the most common cause of blindness in the developed world. In Europe, approximately 25% of those 60 and older have AMD. The ‘dry’ form is relatively common among people over 65, and usually causes only mild sight loss. However, about 15% of patients with dry AMD go on to develop a more serious […]
-
Specification gaming: the flip side of AI ingenuity by DeepMind Blog on April 21, 2020
Specification gaming is a behaviour that satisfies the literal specification of an objective without achieving the intended outcome. We have all had experiences with specification gaming, even if not by this name. Readers may have heard the myth of King Midas and the golden touch, in which the king asks that anything he touches be turned to gold - but soon finds that even food and drink turn to metal in his hands. In the real world, when rewarded for doing well on a homework […]
-
Towards understanding glasses with graph neural networks by DeepMind Blog on April 6, 2020
Under a microscope, a pane of window glass doesn’t look like a collection of orderly molecules, as a crystal would, but rather a jumble with no discernable structure. Glass is made by starting with a glowing mixture of high-temperature melted sand and minerals. Once cooled, its viscosity (a measure of the friction in the fluid) increases a trillion-fold, and it becomes a solid, resisting tension from stretching or pulling. Yet the molecules in the glass remain in a […]
-
Agent57: Outperforming the human Atari benchmark by DeepMind Blog on March 31, 2020
The Atari57 suite of games is a long-standing benchmark to gauge agent performance across a wide range of tasks. We’ve developed Agent57, the first deep reinforcement learning agent to obtain a score that is above the human baseline on all 57 Atari 2600 games. Agent57 combines an algorithm for efficient exploration with a meta-controller that adapts the exploration and long vs. short-term behaviour of the agent.
-
Under the Hood of Uber ATG’s Machine Learning Infrastructure and Versioning Control Platform for Self-Driving Vehicles by Yu Guo (Machine Learning Archives - Uber Engineering Blog) on March 4, 2020
As Uber experienced exponential growth over the last few years, now supporting 14 million trips each day, our engineers proved they could build for scale. That value extends to other areas, including Uber ATG (Advanced Technologies Group) and its quest … The post Under the Hood of Uber ATG’s Machine Learning Infrastructure and Versioning Control Platform for Self-Driving Vehicles appeared first on Uber Engineering Blog.
-
Building a Backtesting Service to Measure Model Performance at Uber-scale by Sam Xiao (Machine Learning Archives - Uber Engineering Blog) on February 13, 2020
With operations in over 700 cities worldwide and gross bookings of over $16 billion in Q3 2019 alone, Uber leverages forecast models to ensure accurate financial planning and budget management. These models, derived from data science practices and platformed for … The post Building a Backtesting Service to Measure Model Performance at Uber-scale appeared first on Uber Engineering Blog.
-
A new model and dataset for long-range memory by DeepMind Blog on February 10, 2020
Throughout our lives, we build up memories that are retained over a diverse array of timescales, from minutes to months to years to decades. When reading a book, we can recall characters who were introduced many chapters ago, or in an earlier book in a series, and reason about their motivations and likely actions in the current context. We can even put the book down during a busy week, and pick up from where we left off without forgetting the plotline.
-
Women in Data Science at Uber: Moving the World With Data in 2020—and Beyond by Emily Bailey (Machine Learning Archives - Uber Engineering Blog) on January 28, 2020
Uber is a company built on data science. We leverage map data to get users from point A to point B; speech and text data to communicate between riders and drivers; restaurant and dish data to recommend food … The post Women in Data Science at Uber: Moving the World With Data in 2020—and Beyond appeared first on Uber Engineering Blog.
-
AlphaFold: Using AI for scientific discovery by DeepMind Blog on January 15, 2020
In our study published in Nature, we demonstrate how artificial intelligence research can drive and accelerate new scientific discoveries. We’ve built a dedicated, interdisciplinary team in hopes of using AI to push basic research forward: bringing together experts from the fields of structural biology, physics, and machine learning to apply cutting-edge techniques to predict the 3D structure of a protein based solely on its genetic sequence.
-
Dopamine and temporal difference learning: A fruitful relationship between neuroscience and AI by DeepMind Blog on January 15, 2020
Learning and motivation are driven by internal and external rewards. Many of our day-to-day behaviours are guided by predicting, or anticipating, whether a given action will result in a positive (that is, rewarding) outcome. The study of how organisms learn from experience to correctly anticipate rewards has been a productive research field for well over a century, since Ivan Pavlov's seminal psychological work. In his most famous experiment, dogs were trained to expect food […]
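The temporal-difference rule at the heart of this line of work can be stated in a few lines. The sketch below is a generic TD(0) update on a toy value table, not code from the paper.

```python
# Generic TD(0) value update (illustrative only): the value estimate for a state is
# nudged by the reward-prediction error, the quantity dopamine neurons appear to signal.
import numpy as np

values = np.zeros(5)        # value estimates for 5 toy states
alpha, gamma = 0.1, 0.9     # learning rate and discount factor

def td_update(state, reward, next_state):
    td_error = reward + gamma * values[next_state] - values[state]  # prediction error
    values[state] += alpha * td_error
    return td_error
```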
-
Open Sourcing Manifold, a Visual Debugging Tool for Machine Learning by Lezhi Li (Machine Learning Archives - Uber Engineering Blog) on January 7, 2020
In January 2019, Uber introduced Manifold, a model-agnostic visual debugging tool for machine learning that we use to identify issues in our ML models. To give other ML practitioners the benefits of this tool, today we are excited to … The post Open Sourcing Manifold, a Visual Debugging Tool for Machine Learning appeared first on Uber Engineering Blog.
-
Uber Visualization Highlights: Displaying City Street Speed Clusters with SpeedsUp by Bryant Luong (Machine Learning Archives - Uber Engineering Blog) on January 2, 2020
Uber’s Data Visualization team builds software that enables us to better understand how cities move through dynamic visualizations. The Uber Engineering Blog periodically highlights visualizations that showcase how these technologies can turn aggregated data into actionable insights. For SpeedsUp, … The post Uber Visualization Highlights: Displaying City Street Speed Clusters with SpeedsUp appeared first on Uber Engineering Blog.
-
Uber AI in 2019: Advancing Mobility with Artificial Intelligence by Zoubin Ghahramani (Machine Learning Archives - Uber Engineering Blog) on December 18, 2019
Artificial intelligence powers many of the technologies and services underpinning Uber’s platform, allowing engineering and data science teams to make informed decisions that help improve user experiences for products across our lines of business. At the forefront of this effort … The post Uber AI in 2019: Advancing Mobility with Artificial Intelligence appeared first on Uber Engineering Blog.
-
Using WaveNet technology to reunite speech-impaired users with their original voices by DeepMind Blog on December 18, 2019
As a teenager, Tim Shaw put everything he had into football practice: his dream was to join the NFL. After playing for Penn State in college, his ambitions were finally realised: the Carolina Panthers drafted him at age 23, and he went on to play for the Chicago Bears and Tennessee Titans, where he broke records as a linebacker. After six years in the NFL, on the cusp of greatness, his performance began to falter. He couldn’t tackle like he once had; his arms slid off the […]
-
Learning human objectives by evaluating hypothetical behaviours by DeepMind Blog on December 13, 2019
When we train reinforcement learning (RL) agents in the real world, we don’t want them to explore unsafe states, such as driving a mobile robot into a ditch or writing an embarrassing email to one’s boss. Training RL agents in the presence of unsafe states is known as the safe exploration problem. We tackle the hardest version of this problem, in which the agent initially doesn’t know how the environment works or where the unsafe states are. The agent has one source of […]
-
Productionizing Distributed XGBoost to Train Deep Tree Models with Large Data Sets at Uber by Joseph Wang (Machine Learning Archives - Uber Engineering Blog) on December 10, 2019
Michelangelo, Uber’s machine learning (ML) platform, powers machine learning model training across various use cases at Uber, such as forecasting rider demand, fraud detection, food discovery and recommendation for Uber Eats, and improving the accuracy of … The post Productionizing Distributed XGBoost to Train Deep Tree Models with Large Data Sets at Uber appeared first on Uber Engineering Blog.
-
From unlikely start-up to major scientific organisation: Entering our tenth year at DeepMind by DeepMind Blog on December 5, 2019
Since we started DeepMind nearly 10 years ago, our mission has been to unlock answers to the world’s biggest questions by understanding and recreating intelligence itself.
-
Announcing the 2020 Uber AI Residency by Ersin Yumer (Machine Learning Archives - Uber Engineering Blog) on November 26, 2019
Connecting the digital and physical worlds safely and reliably on the Uber platform presents exciting technological challenges and opportunities. For Uber, artificial intelligence (AI) is essential to developing systems that are capable of optimized, automated decision making at scale. AI … The post Announcing the 2020 Uber AI Residency appeared first on Uber Engineering Blog.
-
Strengthening the AI community by DeepMind Blog on November 21, 2019
Most people have at least one crossroads moment in their life - when the choice they make defines their personal or professional trajectory. For me, it was being awarded an internship at Intel, the first one ever through Purdue’s Co-Op Engineering program in 1990.
-
Advanced machine learning helps Play Store users discover personalised apps by DeepMind Blog on November 18, 2019
Over the past few years we've applied DeepMind's technology to Google products and infrastructure, with notable successes like reducing the amount of energy needed for cooling data centers, and extending Android battery performance. We're excited to share more about our work in the coming months.
-
AlphaStar: Grandmaster level in StarCraft II using multi-agent reinforcement learning by DeepMind Blog on October 30, 2019
AlphaStar is the first AI to reach the top league of a widely popular esport without any game restrictions. This January, a preliminary version of AlphaStar challenged two of the world's top players in StarCraft II, one of the most enduring and popular real-time strategy video games of all time. Since then, we have taken on a much greater challenge: playing the full game at a Grandmaster level under professionally approved conditions.
-
Evolving Michelangelo Model Representation for Flexibility at Scale by Anne Holler (Machine Learning Archives - Uber Engineering Blog) on October 16, 2019
Michelangelo, Uber’s machine learning (ML) platform, supports the training and serving of thousands of models in production across the company. Designed to cover the end-to-end ML workflow, the system currently supports classical machine learning, time series forecasting, and deep … The post Evolving Michelangelo Model Representation for Flexibility at Scale appeared first on Uber Engineering Blog.
-
Causal Bayesian Networks: A flexible tool to enable fairer machine learning by DeepMind Blog on October 3, 2019
Decisions based on machine learning (ML) are potentially advantageous over human decisions, as they do not suffer from the same subjectivity, and can be more accurate and easier to analyse. At the same time, data used to train ML systems often contain human and societal biases that can lead to harmful decisions: extensive evidence in areas such as hiring, criminal justice, surveillance, and healthcare suggests that ML decision systems can treat individuals unfavorably […]
-
DeepMind’s health team joins Google Health by DeepMind Blog on September 18, 2019
Over the last three years, DeepMind has built a team to tackle some of healthcare’s most complex problems—developing AI research and mobile tools that are already having a positive impact on patients and care teams. Today, with our healthcare partners, the team is excited to officially join the Google Health family. Under the leadership of Dr. David Feinberg, and alongside other teams at Google, we’ll now be able to tap into global expertise in areas like app […]
-
Science at Uber: Improving Transportation with Artificial Intelligence by Wayne Cunningham (Machine Learning Archives - Uber Engineering Blog) on September 17, 2019
At Uber, we take advanced research work and use it to solve real world problems. In our Science at Uber video series, Uber employees talk about how we apply data science, artificial intelligence, machine learning, and other innovative technologies … The post Science at Uber: Improving Transportation with Artificial Intelligence appeared first on Uber Engineering Blog.
-
The Podcast: Episode 8: Demis Hassabis - The interview by DeepMind Blog on September 17, 2019
In this special extended episode, Hannah Fry meets Demis Hassabis, the CEO and co-founder of DeepMind.
-
Three Approaches to Scaling Machine Learning with Uber Seattle Engineering by Bea Schuster (Machine Learning Archives - Uber Engineering Blog) on September 11, 2019
Uber’s services require real-world coordination between a wide range of customers, including driver-partners, riders, restaurants, and eaters. Accurately forecasting things like rider demand and ETAs enables this coordination, which makes our services work as seamlessly as possible. In an effort … The post Three Approaches to Scaling Machine Learning with Uber Seattle Engineering appeared first on Uber Engineering Blog.
-
Science at Uber: Powering Machine Learning at Uber by Wayne Cunningham (Machine Learning Archives - Uber Engineering Blog) on September 10, 2019
At Uber, we take advanced research work and use it to solve real world problems. In our Science at Uber video series, Uber employees talk about how we apply data science, artificial intelligence, machine learning, and other innovative technologies … The post Science at Uber: Powering Machine Learning at Uber appeared first on Uber Engineering Blog.
-
The Podcast: Episode 7: Towards the future by DeepMind Blog on September 10, 2019
AI researchers around the world are trying to create a general purpose learning system that can learn to solve a broad range of problems without being taught how.
-
Replay in biological and artificial neural networks by DeepMind Blog on September 6, 2019
Our waking and sleeping lives are punctuated by fragments of recalled memories: a sudden connection in the shower between seemingly disparate thoughts, or an ill-fated choice decades ago that haunts us as we struggle to fall asleep. By measuring memory retrieval directly in the brain, neuroscientists have noticed something remarkable: spontaneous recollections often occur as very fast sequences of multiple memories. These so-called 'replay' […]
-
The Podcast: Episode 6: AI for everyone by DeepMind Blog on September 3, 2019
While there is a lot of excitement about AI research, there are also concerns about the way it might be implemented, used and abused.
-
The Podcast: Episode 5: Out of the lab by DeepMind Blog on August 27, 2019
The ambition of AI research is to create systems that can help to solve problems in the real world.
-
Advancing AI: A Conversation with Jeff Clune, Senior Research Manager at Uber by Molly Vorwerck (Machine Learning Archives - Uber Engineering Blog) on August 21, 2019
The past few months have been a whirlwind for Jeff Clune, Senior Research Manager at Uber and a founding member of Uber AI Labs. In June 2019, research by him and his collaborators on POET, an algorithm … The post Advancing AI: A Conversation with Jeff Clune, Senior Research Manager at Uber appeared first on Uber Engineering Blog.
-
The Podcast: Episode 4: AI, Robot by DeepMind Blog on August 20, 2019
Forget what sci-fi has told you about superintelligent robots that are uncannily human-like; the reality is more prosaic. Inside DeepMind’s robotics laboratory, Hannah explores what researchers call ‘embodied AI’: robot arms that are learning tasks like picking up plastic bricks, which humans find comparatively easy.
-
The Podcast: Episode 3: Life is like a game by DeepMind Blog on August 19, 2019
Video games have become a favourite tool for AI researchers to test the abilities of their systems. In this episode, Hannah sits down to play StarCraft II - a challenging video game that requires players to control the onscreen action with as many as 800 clicks a minute.
-
The Podcast: Episode 2: Go to Zero by DeepMind Blog on August 18, 2019
In March 2016, more than 200 million people watched AlphaGo become the first computer program to defeat a professional human player at the game of Go, a milestone in AI research that was considered to be a decade ahead of its time.
-
The Podcast: Episode 1: AI and neuroscience - The virtuous circle by DeepMind Blog on August 17, 2019
What can the human brain teach us about AI? And what can AI teach us about our own intelligence? These questions underpin a lot of AI research.
-
Welcome to the DeepMind podcast by DeepMind Blog on August 16, 2019
What’s AI? What can it be used for? Is it safe? And how do I get involved? These are the kinds of questions we often get asked at public events like science festivals, talks and workshops. We love answering them and really value the conversations and thinking they provoke.
-
Using machine learning to accelerate ecological research by DeepMind Blog on August 8, 2019
The Serengeti is one of the last remaining sites in the world that hosts an intact community of large mammals. These animals roam over vast swaths of land, some migrating thousands of miles across multiple countries following seasonal rainfall. As human encroachment around the region becomes more intense, these species are forced to alter their behaviours in order to survive. Increasing agriculture, poaching, and climate abnormalities contribute to changes in animal […]
-
Using AI to give doctors a 48-hour head start on life-threatening illness by DeepMind Blog on July 31, 2019
Artificial intelligence can now predict one of the leading causes of avoidable patient harm up to two days before it happens, as demonstrated by our latest research published in Nature. Working alongside experts from the US Department of Veterans Affairs (VA), we have developed technology that, in the future, could give doctors a 48-hour head start in treating acute kidney injury (AKI), a condition that affects over 100,000 people in the UK every year. These […]
-
How evolutionary selection can train more capable self-driving cars by DeepMind Blog on July 25, 2019
Waymo’s self-driving vehicles employ neural networks to perform many driving tasks, from detecting objects and predicting how others will behave, to planning a car's next moves. Training an individual neural net has traditionally required weeks of fine-tuning and experimentation, as well as enormous amounts of computational power. Now, Waymo, in a research collaboration with DeepMind, has taken inspiration from Darwin’s insights into evolution to make this training more […]
-
Introducing EvoGrad: A Lightweight Library for Gradient-Based Evolution by Alex Gajewski (Machine Learning Archives - Uber Engineering Blog) on July 22, 2019
Tools that enable fast and flexible experimentation democratize and accelerate machine learning research. Take for example the development of libraries for automatic differentiation, such as Theano, Caffe, TensorFlow, and PyTorch: these libraries have been instrumental in … The post Introducing EvoGrad: A Lightweight Library for Gradient-Based Evolution appeared first on Uber Engineering Blog.
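As a rough sketch of the kind of estimator such gradient-based-evolution libraries build on (this is not the EvoGrad API itself), the classic evolution-strategies gradient estimate looks like this:

```python
# Plain-NumPy evolution-strategies gradient estimate (illustrative; not the EvoGrad API).
import numpy as np

def es_gradient(f, theta, sigma=0.1, population=64, seed=0):
    """Estimate the gradient of E[f(theta + sigma * eps)] by Monte Carlo sampling."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal((population, theta.size))
    fitness = np.array([f(theta + sigma * e) for e in eps])
    fitness = (fitness - fitness.mean()) / (fitness.std() + 1e-8)  # fitness shaping
    return eps.T @ fitness / (population * sigma)
```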
-
Unsupervised learning: The curious pupil by DeepMind Blog on June 25, 2019
Over the last decade, machine learning has made unprecedented progress in areas as diverse as image recognition, self-driving cars and playing complex games like Go. These successes have been largely realised by training deep neural networks with one of two learning paradigms—supervised learning and reinforcement learning. Both paradigms require training signals to be designed by a human and passed to the computer. In the case of supervised learning, these are the […]
-
Gaining Insights in a Simulated Marketplace with Machine Learning at Uber by Haoyang Chen (Machine Learning Archives - Uber Engineering Blog) on June 24, 2019
At Uber, we use marketplace algorithms to connect drivers and riders. Before the algorithms roll out globally, Uber fully tests and evaluates them to create an optimal user experience that maps to our core marketplace principles. To make product … The post Gaining Insights in a Simulated Marketplace with Machine Learning at Uber appeared first on Uber Engineering Blog.
-
No Coding Required: Training Models with Ludwig, Uber’s Open Source Deep Learning Toolbox by Molly Vorwerck (Machine Learning Archives - Uber Engineering Blog) on June 14, 2019
Machine learning models perform a diversity of tasks at Uber, from improving our maps to streamlining chat communications and even preventing fraud. In addition to serving a variety of use cases, it is important that we make machine learning … The post No Coding Required: Training Models with Ludwig, Uber’s Open Source Deep Learning Toolbox appeared first on Uber Engineering Blog.
-
Capture the Flag: the emergence of complex cooperative agents by DeepMind Blog on May 30, 2019
Mastering the strategy, tactical understanding, and team play involved in multiplayer video games represents a critical challenge for AI research. In our latest paper, now published in the journal Science, we present new developments in reinforcement learning, resulting in human-level performance in Quake III Arena Capture the Flag. This is a complex, multi-agent environment and one of the canonical 3D first-person multiplayer games. The agents successfully cooperate with […]
-
Improving Uber’s Mapping Accuracy with CatchME by Yuehai Xu (Machine Learning Archives - Uber Engineering Blog) on April 25, 2019
Reliable transportation requires a robust map stack that provides services like routing, navigation instructions, and ETA calculation. Errors in map data can significantly impact services, leading to a suboptimal user experience. Uber engineers use various sources of feedback to identify … The post Improving Uber’s Mapping Accuracy with CatchME appeared first on Uber Engineering Blog.
-
Identifying and eliminating bugs in learned predictive models by DeepMind Blog on March 28, 2019
Bugs and software have gone hand in hand since the beginning of computer programming. Over time, software developers have established a set of best practices for testing and debugging before deployment, but these practices are not suited for modern deep learning systems. Today, the prevailing practice in machine learning is to train a system on a training data set, and then test it on another set. While this reveals the average-case performance of models, it is also crucial […]
-
Accessible Machine Learning through Data Workflow Management by Jianyong Zhang (Machine Learning Archives - Uber Engineering Blog) on March 18, 2019
Machine learning (ML) pervades many aspects of Uber’s business. From responding to customer support tickets to optimizing queries and forecasting demand, ML provides critical insights for many of our teams. Our teams encountered many different challenges while incorporating … The post Accessible Machine Learning through Data Workflow Management appeared first on Uber Engineering Blog.
-
Data Science at Scale: A Conversation with Uber’s Fran Bell by Molly Vorwerck (Machine Learning Archives - Uber Engineering Blog) on March 13, 2019
Fran Bell has always been a scientist; theorizing, modeling and testing how the world works. An ever-curious child, she was fascinated by the natural world, poring over biology and chemistry books, but was never satisfied with just knowing; she … The post Data Science at Scale: A Conversation with Uber’s Fran Bell appeared first on Uber Engineering Blog.
-
TF-Replicator: Distributed Machine Learning for Researchers by DeepMind Blog on March 7, 2019
At DeepMind, the Research Platform Team builds infrastructure to empower and accelerate our AI research. Today, we are excited to share how we developed TF-Replicator, a software library that helps researchers deploy their TensorFlow models on GPUs and Cloud TPUs with minimal effort and no previous experience with distributed systems. TF-Replicator’s programming model has now been open sourced as part of TensorFlow’s tf.distribute.Strategy. This blog post gives an […]
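A minimal example of the tf.distribute.Strategy API mentioned at the end of the excerpt is sketched below; it shows generic TensorFlow usage rather than TF-Replicator internals.

```python
# Generic tf.distribute.Strategy usage (illustrative, not TF-Replicator internals):
# variables created under the strategy scope are mirrored across all local GPUs and
# updated with a synchronous data-parallel step.
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(10, activation="softmax")])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# model.fit(train_dataset) now runs one replica per GPU with averaged gradients.
```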
-
Machine learning can boost the value of wind energy by DeepMind Blog on February 26, 2019
Carbon-free technologies like renewable energy help combat climate change, but many of them have not reached their full potential. Consider wind power: over the past decade, wind farms have become an important source of carbon-free electricity as the cost of turbines has plummeted and adoption has surged. However, the variable nature of wind itself makes it an unpredictable energy source—less useful than one that can reliably deliver power at a set time.
-
Uber Open Source: Catching Up with Fritz Obermeyer and Noah Goodman from the Pyro Team by Molly Vorwerck (Machine Learning Archives - Uber Engineering Blog) on February 21, 2019
Over the past several years, artificial intelligence (AI) has become an integral component of many enterprise tech stacks, facilitating faster, more efficient solutions for everything from self-driving vehicles to automated messaging platforms. On the AI spectrum, deep probabilistic programming, a … The post Uber Open Source: Catching Up with Fritz Obermeyer and Noah Goodman from the Pyro Team appeared first on Uber Engineering Blog.
-
Introducing Ludwig, a Code-Free Deep Learning Toolbox by Piero Molino (Machine Learning Archives - Uber Engineering Blog) on February 11, 2019
Over the last decade, deep learning models have proven highly effective at performing a wide variety of machine learning tasks in vision, speech, and language. At Uber we are using these models for a variety of tasks, including customer support… The post Introducing Ludwig, a Code-Free Deep Learning Toolbox appeared first on Uber Engineering Blog.
-
AlphaStar: Mastering the real-time strategy game StarCraft II by DeepMind Blog on January 24, 2019
Games have been used for decades as an important way to test and evaluate the performance of artificial intelligence systems. As capabilities have increased, the research community has sought games with increasing complexity that capture different elements of intelligence required to solve scientific and real-world problems. In recent years, StarCraft, considered to be one of the most challenging Real-Time Strategy (RTS) games and one of the longest-played esports of all […]
-
Manifold: A Model-Agnostic Visual Debugging Tool for Machine Learning at Uber by Lezhi Li (Machine Learning Archives - Uber Engineering Blog) on January 14, 2019
Machine learning (ML) is widely used across the Uber platform to support intelligent decision making and forecasting for features such as ETA prediction and fraud detection. For optimal results, we invest a lot of resources in developing accurate predictive … The post Manifold: A Model-Agnostic Visual Debugging Tool for Machine Learning at Uber appeared first on Uber Engineering Blog.
-
POET: Endlessly Generating Increasingly Complex and Diverse Learning Environments and their Solutions through the Paired Open-Ended Trailblazer by Rui Wang (Machine Learning Archives - Uber Engineering Blog) on January 8, 2019
Jeff Clune and Kenneth O. Stanley were co-senior authors. We are interested in open-endedness at Uber AI Labs because it offers the potential for generating a diverse and ever-expanding curriculum for machine learning entirely on its own. Having vast amounts … The post POET: Endlessly Generating Increasingly Complex and Diverse Learning Environments and their Solutions through the Paired Open-Ended Trailblazer appeared first on Uber Engineering Blog.
-
Open Source at Uber: Meet Alex Sergeev, Horovod Project Lead by Molly Vorwerck (Machine Learning Archives - Uber Engineering Blog) on December 13, 2018
For Alex Sergeev, the decision to open source his team’s new distributed deep learning framework, Horovod, was an easy one. Tasked with training the machine learning models that power the sensing and perception systems used by our Advanced … The post Open Source at Uber: Meet Alex Sergeev, Horovod Project Lead appeared first on Uber Engineering Blog.
-
AlphaZero: Shedding new light on chess, shogi, and Go by DeepMind Blog on December 6, 2018
In late 2017 we introduced AlphaZero, a single system that taught itself from scratch how to master the games of chess, shogi (Japanese chess), and Go, beating a world-champion program in each case. We were excited by the preliminary results and thrilled to see the response from members of the chess community, who saw in AlphaZero’s games a ground-breaking, highly dynamic and “unconventional” style of play that differed from any chess playing engine that came before it.
-
AlphaFold: Using AI for scientific discovery by DeepMind Blog on December 2, 2018
We’re excited to share DeepMind’s first significant milestone in demonstrating how artificial intelligence research can drive and accelerate new scientific discoveries. With a strongly interdisciplinary approach to our work, DeepMind has brought together experts from the fields of structural biology, physics, and machine learning to apply cutting-edge techniques to predict the 3D structure of a protein based solely on its genetic sequence.
-
How to Get a Better GAN (Almost) for Free: Introducing the Metropolis-Hastings GAN by Ryan Turner (Machine Learning Archives - Uber Engineering Blog) on November 29, 2018
Generative Adversarial Networks (GANs) have achieved impressive feats in realistic image generation and image repair. Art produced by a GAN has even been sold at auction for over $400,000! At Uber, GANs have myriad potential applications, including strengthening our … The post How to Get a Better GAN (Almost) for Free: Introducing the Metropolis-Hastings GAN appeared first on Uber Engineering Blog.
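The excerpt is cut off before the method itself, but the core idea of a Metropolis-Hastings GAN, as described in the paper the post is based on, is to use a calibrated discriminator to accept or reject generator samples. A rough sketch of that selection step (not Uber's code) follows.

```python
# Rough sketch of Metropolis-Hastings sample selection for a GAN (illustrative only):
# a calibrated discriminator probability D(x) implies a density ratio D/(1-D), which
# drives a standard MH accept/reject walk over a chain of generator samples.
import numpy as np

def mh_select(samples, d_scores, seed=0):
    """Return one sample chosen by MH acceptance over discriminator scores in (0, 1)."""
    rng = np.random.default_rng(seed)
    current = 0
    for k in range(1, len(samples)):
        ratio_new = d_scores[k] / (1.0 - d_scores[k])
        ratio_cur = d_scores[current] / (1.0 - d_scores[current])
        if rng.uniform() < min(1.0, ratio_new / ratio_cur):
            current = k
    return samples[current]
```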
-
Collaboration at Scale: Highlights from Uber Open Summit 2018 by Wayne Cunningham (Machine Learning Archives - Uber Engineering Blog) on November 20, 2018
Uber held its first open source summit on November 15, 2018, inviting members of the open source community for presentations given by experts on some of the projects we have contributed to in the fields of big data, visualization, machine learning, … The post Collaboration at Scale: Highlights from Uber Open Summit 2018 appeared first on Uber Engineering Blog.
-
Experience in AI: Uber Hires Jan Pedersen by Wayne Cunningham (Machine Learning Archives - Uber Engineering Blog) on November 15, 2018
Whenever a rider gets dropped off at their location, one of our driver-partners finishes a session laden with trips, or an eater gets food delivered to their door, data underlies these interactions on the Uber platform. And our teams could … The post Experience in AI: Uber Hires Jan Pedersen appeared first on Uber Engineering Blog.
-
NVIDIA: Accelerating Deep Learning with Uber’s Horovod by Molly Vorwerck (Machine Learning Archives - Uber Engineering Blog) on November 14, 2018
NVIDIA, inventor of the GPU, creates solutions for building and training AI-enabled systems. In addition to providing hardware and software for much of the industry’s AI research, NVIDIA is building an AI computing platform for developers of self-driving vehicles. With … The post NVIDIA: Accelerating Deep Learning with Uber’s Horovod appeared first on Uber Engineering Blog.
-
Scaling Streams with Google by DeepMind Blog on November 13, 2018
We’re excited to announce that the team behind Streams - our mobile app that supports doctors and nurses to deliver faster, better care to patients - will be joining Google.
-
My Journey from Working as a Fabric Weaver in Ethiopia to Becoming a Software Engineer at Uber in San Francisco by Samuel Zemedkun (Machine Learning Archives - Uber Engineering Blog) on November 12, 2018
I was born in Addis Ababa, Ethiopia and was raised there with my five younger sisters. My father made traditional fabrics, weaving one thread at a time. Weaving in Ethiopia is a family business and every member of the family … The post My Journey from Working as a Fabric Weaver in Ethiopia to Becoming a Software Engineer at Uber in San Francisco appeared first on Uber Engineering Blog.
-
Predicting eye disease with Moorfields Eye Hospital by DeepMind Blog on November 5, 2018
In August, we announced the first stage of our joint research partnership with Moorfields Eye Hospital, which showed how AI could match world-leading doctors at recommending the correct course of treatment for over 50 eye diseases, and also explain how it arrives at its recommendations.
-
Michelangelo PyML: Introducing Uber’s Platform for Rapid Python ML Model Development by Kevin Stumpf (Machine Learning Archives - Uber Engineering Blog) on October 23, 2018
As a company heavily invested in AI, Uber aims to leverage machine learning (ML) in product development and the day-to-day management of our business. In pursuit of this goal, our data scientists spend considerable amounts of time prototyping and validating … The post Michelangelo PyML: Introducing Uber’s Platform for Rapid Python ML Model Development appeared first on Uber Engineering Blog.
-
Applying Customer Feedback: How NLP & Deep Learning Improve Uber’s Maps by Chun-Chen Kuo (Machine Learning Archives - Uber Engineering Blog) on October 22, 2018
High quality map data powers many aspects of the Uber trip experience. Services such as Search, Routing, and Estimated Time of Arrival (ETA) prediction rely on accurate map data to provide a safe, convenient, and efficient experience for riders, drivers, … The post Applying Customer Feedback: How NLP & Deep Learning Improve Uber’s Maps appeared first on Uber Engineering Blog.
-
Open sourcing TRFL: a library of reinforcement learning building blocks by DeepMind Blog on October 17, 2018
Today we are open sourcing a new library of useful building blocks for writing reinforcement learning (RL) agents in TensorFlow. Named TRFL (pronounced ‘truffle’), it represents a collection of key algorithmic components that we have used internally for a large number of our most successful agents such as DQN, DDPG and the Importance Weighted Actor Learner Architecture.
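As an illustration of the kind of building block TRFL packages (shown here as plain NumPy rather than TRFL's TensorFlow ops), a one-step Q-learning loss can be written as:

```python
# Plain-NumPy sketch of a one-step Q-learning loss, the sort of component TRFL provides
# as a TensorFlow op (this is not the TRFL API itself).
import numpy as np

def qlearning_loss(q_tm1, a_tm1, r_t, discount_t, q_t):
    """Batched TD loss: q_tm1/q_t are [batch, actions]; a_tm1, r_t, discount_t are [batch]."""
    target = r_t + discount_t * q_t.max(axis=1)      # bootstrapped target value
    chosen = q_tm1[np.arange(len(a_tm1)), a_tm1]     # Q-value of the action actually taken
    td_error = target - chosen
    return 0.5 * np.mean(td_error ** 2), td_error
```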
-
Expanding our research on breast cancer screening to Japan by DeepMind Blog on October 4, 2018
Six months ago, we joined a groundbreaking new research partnership led by the Cancer Research UK Imperial Centre at Imperial College London to explore whether AI technology could help clinicians diagnose breast cancers on mammograms quicker and more effectively.
-
Improving Driver Communication through One-Click Chat, Uber’s Smart Reply System by Yue Weng (Machine Learning Archives - Uber Engineering Blog) on September 28, 2018
Imagine standing curbside, waiting for your Uber ride to arrive. On your app, you see that the car is barely moving. You send them a message to find out what’s going on. Unbeknownst to you, your driver-partner is stuck in … The post Improving Driver Communication through One-Click Chat, Uber’s Smart Reply System appeared first on Uber Engineering Blog.
-
Introducing Petastorm: Uber ATG’s Data Access Library for Deep Learning by Robbie Gruener (Machine Learning Archives - Uber Engineering Blog) on September 21, 2018
In recent years, deep learning has taken a central role in solving a wide range of problems in pattern recognition. At Uber Advanced Technologies Group (ATG), we use deep learning to solve various problems in the autonomous driving space, since … The post Introducing Petastorm: Uber ATG’s Data Access Library for Deep Learning appeared first on Uber Engineering Blog.
-
Using AI to plan head and neck cancer treatments by DeepMind Blog on September 13, 2018
Early results from our partnership with the Radiotherapy Department at University College London Hospitals NHS Foundation Trust suggest that we are well on our way to developing an artificial intelligence (AI) system that can analyse and segment medical scans of head and neck cancer to a similar standard as expert clinicians. This segmentation process is an essential but time-consuming step when planning radiotherapy treatment. The findings also show that our system can […]
-
Preserving Outputs Precisely while Adaptively Rescaling Targets by DeepMind Blog on September 13, 2018
Multi-task learning - allowing a single agent to learn how to solve many different tasks - is a longstanding objective for artificial intelligence research. Recently, there has been a lot of excellent progress, with agents like DQN able to use the same algorithm to learn to play multiple games including Breakout and Pong. These algorithms were used to train individual expert agents for each task. As artificial intelligence research advances to more complex real world […]
-
Food Discovery with Uber Eats: Recommending for the Marketplace by Yuyan Wang (Machine Learning Archives - Uber Engineering Blog) on September 10, 2018
Even as we improve Uber Eats to better understand eaters’ intentions when they use search, there are times when eaters just don’t know what they want to eat. In those situations, the Uber Eats app provides a personalized experience for … The post Food Discovery with Uber Eats: Recommending for the Marketplace appeared first on Uber Engineering Blog.
-
Safety-first AI for autonomous data centre cooling and industrial control by DeepMind Blog on August 17, 2018
Many of society’s most pressing problems have grown increasingly complex, so the search for solutions can feel overwhelming. At DeepMind and Google, we believe that if we can use AI as a tool to discover new knowledge, solutions will be easier to reach.
-
A major milestone for the treatment of eye disease by DeepMind Blog on August 13, 2018
We are delighted to announce the results of the first phase of our joint research partnership with Moorfields Eye Hospital, which could potentially transform the management of sight-threatening eye disease.
-
Objects that Sound by DeepMind Blog on August 6, 2018
Visual and audio events tend to occur together: a musician plucking guitar strings and the resulting melody; a wine glass shattering and the accompanying crash; the roar of a motorcycle as it accelerates. These visual and audio stimuli are concurrent because they share a common cause. Understanding the relationship between visual events and their associated sounds is a fundamental way that we make sense of the world around us.
-
Measuring abstract reasoning in neural networks by DeepMind Blog on July 11, 2018
Neural network-based models continue to achieve impressive results on longstanding machine learning problems, but establishing their capacity to reason about abstract concepts has proven difficult. Building on previous efforts to solve this important feature of general-purpose learning systems, our latest paper sets out an approach for measuring abstract reasoning in learning machines, and reveals some important insights about the nature of generalisation itself.
-
DeepMind papers at ICML 2018 by DeepMind Blog on July 9, 2018
The 2018 International Conference on Machine Learning will take place in Stockholm, Sweden from 10-15 July. For those attending and planning the week ahead, we are sharing a schedule of DeepMind presentations at ICML (you can download a pdf version here). We look forward to the many engaging discussions, ideas, and collaborations that are sure to arise from the conference!
-
DeepMind Health Response to Independent Reviewers' Report 2018 by DeepMind Blog on June 15, 2018
When we set up DeepMind Health we believed that pioneering technology should be matched with pioneering oversight. That’s why when we launched in February 2016, we did so with an unusual and additional mechanism: a panel of Independent Reviewers, who meet regularly throughout the year to scrutinise our work. This is an innovative approach within tech companies - one that forces us to question not only what we are doing, but how and why we are doing it - and we believe that […]
-
Neural scene representation and rendering by DeepMind Blog on June 14, 2018
There is more than meets the eye when it comes to how we understand a visual scene: our brains draw on prior knowledge to reason and to make inferences that go far beyond the patterns of light that hit our retinas. For example, when entering a room for the first time, you instantly recognise the items it contains and where they are positioned. If you see three legs of a table, you will infer that there is probably a fourth leg with the same shape and colour hidden from view. […]