ML and AI Research

  • From Gaming to Enterprise AI: Don’t Miss NVIDIA’s Computex 2021 Keynote
    by Brian Caulfield (The Official NVIDIA Blog) on May 19, 2021

    NVIDIA will deliver a double-barrelled keynote packed with innovations in AI, the cloud, data centers and gaming at Computex 2021 in Taiwan, on June 1. NVIDIA’s Jeff Fisher, senior vice president of GeForce gaming products, will discuss how NVIDIA is addressing the explosive growth in worldwide gaming. And Manuvir Das, head of enterprise computing at […]

  • Speed up YOLOv4 inference to twice as fast on Amazon SageMaker
    by Santosh Bhavani (AWS Machine Learning Blog) on May 18, 2021

    Machine learning (ML) models have been deployed successfully across a variety of use cases and industries, but due to the high computational complexity of recent ML models such as deep neural networks, inference deployments have been limited by performance and cost constraints. To add to the challenge, preparing a model for inference involves packaging the

  • Amazon Lookout for Vision Accelerator Proof of Concept (PoC) Kit
    by Amit Gupta (AWS Machine Learning Blog) on May 18, 2021

    Amazon Lookout for Vision is a machine learning service that spots defects and anomalies in visual representations using computer vision. With Amazon Lookout for Vision, manufacturing companies can increase quality and reduce operational costs by quickly identifying differences in images of objects at scale. Basler and Amazon Lookout for Vision have collaborated to launch the “Amazon

  • Google I/O 2021: Being helpful in moments that matter
    by (AI) on May 18, 2021

    It’s great to be back hosting our I/O Developers Conference this year. Pulling up to our Mountain View campus this morning, I felt a sense of normalcy for the first time in a long while. Of course, it’s not the same without our developer community here in person. COVID-19 has deeply affected our entire global community over the past year and continues to take a toll. Places such as Brazil, and my home country of India, are now going through their most difficult moments […]

  • Tackling tuberculosis screening with AI
    by (AI) on May 18, 2021

    Today we’re sharing new AI research that aims to improve screening for one of the top causes of death worldwide: tuberculosis (TB). TB infects 10 million people per year and disproportionately affects people in low-to-middle-income countries. Diagnosing TB early is difficult because its symptoms can mimic those of common respiratory diseases. Cost-effective screening, specifically chest X-rays, has been identified as one way to improve the screening process. However, […]

  • Using AI to help find answers to common skin conditions
    by (AI) on May 18, 2021

    Artificial intelligence (AI) has the potential to help clinicians care for patients and treat disease — from improving the screening process for breast cancer to helping detect tuberculosis more efficiently. When we combine these advances in AI with other technologies, like smartphone cameras, we can unlock new ways for people to stay better informed about their health, too. Today at I/O, we shared a preview of an AI-powered dermatology assist tool that helps you […]

  • A smoother ride and a more detailed Map thanks to AI
    by (AI) on May 18, 2021

    AI is a critical part of what makes Google Maps so helpful. With it, we’re able to map roads over 10 times faster than we could five years ago, and we can bring maps filled with useful information to virtually every corner of the world. Today, we’re giving you a behind-the-scenes look at how AI makes two of the features we announced at I/O possible. Teaching Maps to identify and forecast when people are hitting the brakes: let’s start with our routing update that helps […]

  • Unveiling our new Quantum AI campus
    by (AI) on May 18, 2021

    Within the decade, Google aims to build a useful, error-corrected quantum computer. This will accelerate solutions for some of the world’s most pressing problems, like sustainable energy and reduced emissions to feed the world’s growing population, and unlock new scientific discoveries, like more helpful AI. To begin our journey, today we’re unveiling our new Quantum AI campus in Santa Barbara, California. This campus includes our first quantum data center, our […]

  • LaMDA: our breakthrough conversation technology
    by (AI) on May 18, 2021

    We've always had a soft spot for language at Google. Early on, we set out to translate the web. More recently, we’ve invented machine learning techniques that help us better grasp the intent of Search queries. Over time, our advances in these and other areas have made it easier and easier to organize and access the heaps of information conveyed by the written and spoken word. But there’s always room for improvement. Language is remarkably nuanced and adaptable. It can be […]

  • A Further Step to Getting GeForce Cards into the Hands of Gamers
    by Matt Wuebbling (The Official NVIDIA Blog) on May 18, 2021

    GeForce products are made for gamers — and packed with innovations. Our RTX 30 Series is built on our second-generation RTX architecture, with dedicated RT Cores and Tensor Cores, delivering amazing visuals and performance to gamers and creators. Because NVIDIA GPUs are programmable, users regularly discover new applications for them, from weather simulation and gene […]

  • NVIDIA BlueField DPUs Fuel Unprecedented Data Center Transformation
    by Itay Ozery (The Official NVIDIA Blog) on May 17, 2021

    Cloud computing and AI are pushing the boundaries of scale and performance for data centers. Anticipating this shift, industry leaders such as Baidu, Palo Alto Networks, Red Hat and VMware are using NVIDIA BlueField DPUs to transform their data center platforms into higher performing, more secure, agile platforms and bring differentiated products and services to […]

  • DiDi Chooses NVIDIA DRIVE for New Fleet of Self-Driving Robotaxis
    by Katie Burke (The Official NVIDIA Blog) on May 17, 2021

    Robotaxis are one major step closer to becoming reality. DiDi Autonomous Driving, the self-driving technology arm of mobility technology leader Didi Chuxing, announced last month a strategic partnership with Volvo Cars on autonomous vehicles for DiDi’s self-driving test fleet. Volvo Cars’ autonomous drive-ready XC90 cars will be the first to integrate DiDi Gemini, a new […]

  • Microsoft and NVIDIA introduce parameter-efficient multimodal transformers for video representation learning
    by Alexis Hagen (Microsoft Research) on May 17, 2021

    Understanding video is one of the most challenging problems in AI, and an important underlying requirement is learning multimodal representations that capture information about objects, actions, sounds, and their long-range statistical dependencies from audio-visual signals. Recently, transformers have been successful in vision-and-language tasks such as image captioning and visual question answering due to their ability to […]

  • Prepare data for predicting credit risk using Amazon SageMaker Data Wrangler and Amazon SageMaker Clarify
    by Courtney McKay (AWS Machine Learning Blog) on May 14, 2021

    For data scientists and machine learning (ML) developers, data preparation is one of the most challenging and time-consuming tasks of building ML solutions. In an often iterative and highly manual process, data must be sourced, analyzed, cleaned, and enriched before it can be used to train an ML model. Typical tasks associated with data preparation

  • AI Slam Dunk: Startup’s Checkout-Free Stores Provide Stadiums Fast Refreshments
    by Scott Martin (The Official NVIDIA Blog) on May 14, 2021

    With live sports making a comeback, one thing remains a constant: Nobody likes to miss big plays while waiting in line for a cold drink or snack. Zippin offers sports fans checkout-free refreshments, and it’s racking up wins among stadiums as well as retailers, hotels, apartments and offices. The startup, based in San Francisco, develops […]

  • Maximize TensorFlow performance on Amazon SageMaker endpoints for real-time inference
    by Chaitanya Hazarey (AWS Machine Learning Blog) on May 13, 2021

    Machine learning (ML) is realized in inference. The business problem you want your ML model to solve is the inferences or predictions that you want your model to generate. Deployment is the stage in which a model, after being trained, is ready to accept inference requests. In this post, we describe the parameters that you

  • Build BI dashboards for your Amazon SageMaker Ground Truth labels and worker metadata
    by Vidya Sagar Ravipati (AWS Machine Learning Blog) on May 13, 2021

    This is the second in a two-part series on the Amazon SageMaker Ground Truth hierarchical labeling workflow and dashboards. In Part 1: Automate multi-modality, parallel data labeling workflows with Amazon SageMaker Ground Truth and AWS Step Functions, we looked at how to create multi-step labeling workflows for hierarchical label taxonomies using AWS Step Functions. In

  • CHI 2021: Making remote and hybrid meetings work in the new future of work
    by Alexis Hagen (Microsoft Research) on May 13, 2021

    Over the course of the COVID-19 pandemic, some truths about the nature of work have been underscored: it is uniquely complex, quickly shifting, and increasingly technology-mediated. Teaching, medicine, mental health, and other professions—previously thought to be near-impossible to do remotely—have all abruptly moved to online and hybrid mediums. All kinds of workers have needed to […]

  • GFN Thursday Set to Evolve as Biomutant Comes to GeForce NOW on May 25
    by GeForce NOW Community (The Official NVIDIA Blog) on May 13, 2021

    GeForce NOW is always evolving, and so is this week’s GFN Thursday. Biomutant, the new open-world action RPG from Experiment 101 and THQ Nordic, is coming to GeForce NOW when it releases on May 25. Everybody Was Kung Fu Fighting: Biomutant puts you in the role of an anthropomorphic rodent with swords, guns and martial […]

  • Build a scalable machine learning pipeline for ultra-high resolution medical images using Amazon SageMaker
    by Karan Sindwani (AWS Machine Learning Blog) on May 12, 2021

    Neural networks have proven effective at solving complex computer vision tasks such as object detection, image similarity, and classification. With the evolution of low-cost GPUs, the computational cost of building and deploying a neural network has drastically reduced. However, most techniques are designed to handle pixel resolutions commonly found in visual media. For example, typical
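
    A common workaround for such resolutions is to tile each image into fixed-size patches before it reaches the network. The snippet below is a minimal, illustrative NumPy sketch of that idea (the 512-pixel tile size is an arbitrary choice, not a detail from the post):

    ```python
    import numpy as np

    def tile_image(image: np.ndarray, tile: int = 512, stride: int = 512) -> np.ndarray:
        """Split a large H x W x C image into fixed-size patches.

        Illustrative only: real whole-slide pipelines usually stream tiles
        from disk rather than holding the full image in memory.
        """
        h, w = image.shape[:2]
        patches = [
            image[y:y + tile, x:x + tile]
            for y in range(0, h - tile + 1, stride)
            for x in range(0, w - tile + 1, stride)
        ]
        return np.stack(patches)

    # A synthetic 4096 x 4096 RGB "slide" yields an 8 x 8 grid of 512-pixel tiles.
    big = np.zeros((4096, 4096, 3), dtype=np.uint8)
    print(tile_image(big).shape)  # (64, 512, 512, 3)
    ```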

  • Build a cognitive search and a health knowledge graph using AWS AI services
    by Prithiviraj Jothikumar (AWS Machine Learning Blog) on May 11, 2021

    Medical data is highly contextual and heavily multi-modal, with each data silo treated separately. To bridge these different data sources, a knowledge graph-based approach integrates data across domains and helps represent complex scientific knowledge more naturally. For example, three components of major electronic health records (EHR) are diagnosis codes, primary notes, and

  • Improve the streaming transcription experience with Amazon Transcribe partial results stabilization
    by Alex Chirayath (AWS Machine Learning Blog) on May 11, 2021

    Whether you’re watching a live broadcast of your favorite soccer team, having a video chat with a vendor, or calling your bank about a loan payment, streaming speech content is everywhere. You can apply a streaming transcription service to generate subtitles for content understanding and accessibility, to create metadata to enable search, or to extract

  • The Washington Post Launches Audio Articles Voiced by Amazon Polly 
    by Esther Lee (AWS Machine Learning Blog) on May 11, 2021

    AWS is excited to announce that The Washington Post is integrating Amazon Polly to provide their readers with audio access to stories across The Post’s entire spectrum of web and mobile platforms, starting with technology stories. Amazon Polly is a service that turns text into lifelike speech, allowing you to create applications that talk, and build entirely new categories
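
    For context, the text-to-speech step itself is a single Polly API call. A minimal boto3 sketch (the voice, region, and sample sentence are illustrative choices, not details of The Post’s integration):

    ```python
    import boto3

    polly = boto3.client("polly", region_name="us-east-1")

    # Synthesize a short snippet of article text into an MP3 stream.
    response = polly.synthesize_speech(
        Text="The Washington Post now offers audio versions of technology stories.",
        OutputFormat="mp3",
        VoiceId="Joanna",  # one of Polly's standard US English voices
    )

    with open("article.mp3", "wb") as f:
        f.write(response["AudioStream"].read())
    ```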

  • Keeping Games Up to Date in the Cloud
    by Andrew Fear (The Official NVIDIA Blog) on May 11, 2021

    GeForce NOW ensures your favorite games are automatically up to date, avoiding game updates and patches. Simply log in, click PLAY, and enjoy an optimal cloud gaming experience. Here’s an overview of how the service keeps your library game ready at all times. Updating Games for All GeForce NOW Members: when a gamer downloads an update […]

  • Create in Record Time with New NVIDIA Studio Laptops from Dell, HP, Lenovo, Gigabyte, MSI and Razer
    by Gerardo Delgado (The Official NVIDIA Blog) on May 11, 2021

    New NVIDIA Studio laptops from Dell, HP, Lenovo, Gigabyte, MSI and Razer were announced today as part of the record-breaking GeForce laptop launch. The new Studio laptops are powered by GeForce RTX 30 Series and NVIDIA RTX professional laptop GPUs, including designs with the new GeForce RTX 3050 Ti and 3050 laptop GPUs, and the […]

  • OpenAI Scholars 2021: Final Projects
    by OpenAI (OpenAI) on May 10, 2021

    We’re proud to announce that the 2021 class of OpenAI Scholars has completed our six-month mentorship program, in which each Scholar produced an open-source research project with stipends and support from OpenAI. Working alongside leading OpenAI researchers who created GPT-3 and DALL·E, our Scholars explored topics like AI

  • Hands-on research and prototyping for haptics
    by Brenda Potts (Microsoft Research) on May 10, 2021

    While many of us think of human-computer interaction as a job for the eyes, ears and mind, we don’t think as often about the importance and complexity of our tactile interactions with computers. Haptics – the sense of touch or tactile sensations – permeates our computing experience, though we are often so habituated to it […]

  • Build an anomaly detection model from scratch with Amazon Lookout for Vision
    by Niklas Palm (AWS Machine Learning Blog) on May 10, 2021

    A common problem in manufacturing is verifying that products meet quality standards. You can use manual inspection on a subset of the products, but it’s usually not scalable enough to meet demand as production grows. In this post, I go through the steps of creating an end-to-end machine vision solution that identifies visual anomalies in
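
    Once a model is trained and hosted, inference goes through the Lookout for Vision runtime API. A hedged boto3 sketch, assuming a project and a started model version already exist (the project name and image file are placeholders):

    ```python
    import boto3

    lookout = boto3.client("lookoutvision", region_name="us-east-1")

    # Assumes model version "1" of this project has been started for hosting.
    with open("test-image.jpg", "rb") as image:
        result = lookout.detect_anomalies(
            ProjectName="my-defect-project",  # placeholder project name
            ModelVersion="1",
            Body=image.read(),
            ContentType="image/jpeg",
        )

    prediction = result["DetectAnomalyResult"]
    print(prediction["IsAnomalous"], prediction["Confidence"])
    ```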

  • Build an intelligent search solution with automated content enrichment
    by Abhinav Jawadekar (AWS Machine Learning Blog) on May 7, 2021

    Unstructured data belonging to the enterprise continues to grow, making it a challenge for customers and employees to get the information they need. Amazon Kendra is a highly accurate intelligent search service powered by machine learning (ML). It helps you easily find the content you’re looking for, even when it’s scattered across multiple locations and
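
    For reference, querying an existing Kendra index from code is a single call. A minimal boto3 sketch (the index ID and question are placeholders):

    ```python
    import boto3

    kendra = boto3.client("kendra", region_name="us-east-1")

    response = kendra.query(
        IndexId="00000000-0000-0000-0000-000000000000",  # placeholder index ID
        QueryText="How do I reset my VPN password?",
    )

    # Each result item carries a type (answer, document, FAQ) and document metadata.
    for item in response["ResultItems"]:
        print(item["Type"], item.get("DocumentTitle", {}).get("Text"))
    ```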

  • Advancing sports analytics through AI research
    on May 7, 2021

    Sports analytics is in the midst of a remarkably important era, offering interesting opportunities for AI researchers and sports leaders alike.

  • Microsoft Research collaborates with KAIST in Korea to explore bimanual interactions with haptic feedback in virtual reality
    by Alexis Hagen (Microsoft Research) on May 6, 2021

    Editor’s Note: Bimanual controllers are frequently used to enhance the realism and immersion of virtual reality experiences such as games and simulations. Researchers have typically relied on mechanical linkages between the controllers to recreate the sensation of holding different objects with both hands. However, those linkages cannot quickly adapt to simulate dynamic objects. They also […]

  • Create a serverless pipeline to translate large documents with Amazon Translate
    by Jay Rao (AWS Machine Learning Blog) on May 6, 2021

    In our previous post, we described how to translate documents using the real-time translation API from Amazon Translate and AWS Lambda. However, this method may not work for files that are too large: they may take too much time, triggering the 15-minute timeout limit of Lambda functions. One can use the batch API, but this is available only in seven AWS Regions (as
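
    For context, the real-time path mentioned above is a synchronous call like the one below; it is fine for individual chunks of text, but looping over a very large document inside a single Lambda invocation is what runs into the 15-minute limit and motivates the batch or Step Functions approach. A minimal boto3 sketch (the language codes and text are illustrative):

    ```python
    import boto3

    translate = boto3.client("translate", region_name="us-east-1")

    # Synchronous, real-time translation of a small piece of text.
    result = translate.translate_text(
        Text="Amazon Translate makes documents available in many languages.",
        SourceLanguageCode="en",
        TargetLanguageCode="es",
    )
    print(result["TranslatedText"])
    ```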

  • How Genworth built a serverless ML pipeline on AWS using Amazon SageMaker and AWS Glue
    by Liam Pearson (AWS Machine Learning Blog) on May 6, 2021

    This post is co-written with Liam Pearson, a Data Scientist at Genworth Mortgage Insurance Australia Limited. Genworth Mortgage Insurance Australia Limited is a leading provider of lenders mortgage insurance (LMI) in Australia; their shares are traded on the Australian Stock Exchange as ASX: GMA. Genworth is a lenders mortgage insurer with over

  • Perform batch fraud predictions with Amazon Fraud Detector without writing code or integrating an API
    by Bilal Ali (AWS Machine Learning Blog) on May 6, 2021

    Amazon Fraud Detector is a fully managed service that makes it easy to identify potentially fraudulent online activities, such as the creation of fake accounts or online payment fraud. Unlike general-purpose machine learning (ML) packages, Amazon Fraud Detector is designed specifically to detect fraud. Amazon Fraud Detector combines your data, the latest in ML science,

  • Sharpening Its Edge: U.S. Postal Service Opens AI Apps on Edge Network
    by Rick Merritt (The Official NVIDIA Blog) on May 6, 2021

    In 2019, the U.S. Postal Service had a need to identify and track items in its torrent of more than 100 million pieces of daily mail. A USPS AI architect had an idea. Ryan Simpson wanted to expand an image analysis system a postal team was developing into something much broader that could tackle this […]

  • GFN Thursday: 61 Games Join GeForce NOW Library in May
    by GeForce NOW Community (The Official NVIDIA Blog) on May 6, 2021

    May’s shaping up to be a big month for bringing fan-favorites to GeForce NOW. And since it’s the first week of the month, this week’s GFN Thursday is all about the games members can look forward to joining the service this month. In total, we’re adding 61 games to the GeForce NOW library in May, […]

  • Game theory as an engine for large-scale data analysis
    on May 6, 2021

    Our research explored a new approach to an old problem: we reformulated principal component analysis (PCA), a type of eigenvalue problem, as a competitive multi-agent game we call EigenGame.
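
    As a reminder of the underlying formulation (this is the classical eigenvalue view, not DeepMind’s EigenGame update rules), the top principal components are the leading eigenvectors of the data covariance matrix:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))          # toy data: 1000 samples, 5 features
    X -= X.mean(axis=0)                     # center the data

    cov = (X.T @ X) / (len(X) - 1)          # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigendecomposition, ascending order

    # The top-k principal components are the eigenvectors with the largest eigenvalues.
    k = 2
    components = eigvecs[:, ::-1][:, :k]
    projected = X @ components              # data projected onto the top-2 components
    print(projected.shape)                  # (1000, 2)
    ```

    EigenGame instead assigns one player per component, each maximizing a utility that rewards captured variance and penalizes alignment with the other players’ directions, so the game’s equilibrium recovers these same eigenvectors.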

  • Automatically scale Amazon Kendra query capacity units with Amazon EventBridge and AWS Lambda
    by Juan Bustos (AWS Machine Learning Blog) on May 5, 2021

    Data is proliferating inside the enterprise, and employees are using more applications than ever before to get their jobs done. In fact, according to Okta Inc., the number of software apps deployed by large firms across all industries worldwide has increased 68%, reaching an average of 129 apps per company. As employees continue to self-serve
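
    The pattern in the title is an EventBridge schedule invoking a Lambda function that adjusts the index’s capacity. A hedged sketch of such a handler, assuming the UpdateIndex operation’s CapacityUnits structure and using placeholder values:

    ```python
    import boto3

    kendra = boto3.client("kendra")

    def handler(event, context):
        """Hypothetical Lambda handler: scale Kendra query capacity on a schedule.

        The desired unit counts would normally come from the EventBridge event
        or environment variables; fixed numbers are used here for illustration.
        """
        kendra.update_index(
            Id="00000000-0000-0000-0000-000000000000",  # placeholder index ID
            CapacityUnits={
                "QueryCapacityUnits": 2,    # add query capacity for peak hours
                "StorageCapacityUnits": 0,  # leave storage capacity unchanged
            },
        )
        return {"status": "capacity update requested"}
    ```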

  • Automate multi-modality, parallel data labeling workflows with Amazon SageMaker Ground Truth and AWS Step Functions
    by Vidya Sagar Ravipati (AWS Machine Learning Blog) on May 5, 2021

    This is the first in a two-part series on the Amazon SageMaker Ground Truth hierarchical labeling workflow and dashboards. In Part 1, we look at creating multi-step labeling workflows for hierarchical label taxonomies using AWS Step Functions. In Part 2 (coming soon), we look at how to build dashboards for analyzing dataset annotations and worker

  • Woolaroo: a new tool for exploring indigenous languages
    by (AI) on May 5, 2021

    “Our dictionary doesn’t have a word for shoe,” my Uncle Allan Lena said, so when kids ask him what to call it in Yugambeh, he’ll say “jinung gulli” - a foot thing. Uncle Allan Lena is a frontline worker in the battle to reteach the Yugambeh Aboriginal language to the children of southeast Queensland, Australia, where it hasn’t been spoken fluently for decades and thus is – like many other languages around the world – in danger of disappearing. For the […]

  • Putting the AI in Retail: Walmart’s Grant Gelvin on Prediction Analytics at Supercenter Scale
    by Lauren Finkle (The Official NVIDIA Blog) on May 5, 2021

    With only one U.S. state without a Walmart supercenter — and over 4,600 stores across the country — the retail giant’s prediction analytics work with data on an enormous scale. Grant Gelven, a machine learning engineer at Walmart Global Tech, joined NVIDIA AI Podcast host Noah Kravitz for the latest episode of the AI Podcast.

  • Segment paragraphs and detect insights with Amazon Textract and Amazon Comprehend
    by Mona Mona (AWS Machine Learning Blog) on May 5, 2021

    Many companies extract data from scanned documents containing tables and forms, such as PDFs. Some examples are audit documents, tax documents, whitepapers, or customer review documents. For customer reviews, you might be extracting text such as product reviews, movie reviews, or feedback. Further understanding of the individual and overall sentiment of the user base from
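
    The basic flow is to pull the text out of a scanned page with Textract and then run Comprehend over the extracted lines. A minimal boto3 sketch for a single-page image (the file name is a placeholder, and only sentiment detection is shown):

    ```python
    import boto3

    textract = boto3.client("textract", region_name="us-east-1")
    comprehend = boto3.client("comprehend", region_name="us-east-1")

    # 1) Extract the raw lines of text from a scanned page.
    with open("review-page.png", "rb") as doc:
        extraction = textract.detect_document_text(Document={"Bytes": doc.read()})

    lines = [b["Text"] for b in extraction["Blocks"] if b["BlockType"] == "LINE"]
    text = " ".join(lines)

    # 2) Detect overall sentiment of the extracted text (truncated to stay
    #    within Comprehend's per-request size limit).
    sentiment = comprehend.detect_sentiment(Text=text[:4500], LanguageCode="en")
    print(sentiment["Sentiment"], sentiment["SentimentScore"])
    ```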

  • Advancing Excel as a programming language with Andy Gordon and Simon Peyton Jones
    by Alyssa Hughes (Microsoft Research) on May 5, 2021

    Today, people around the globe—from teachers to small-business owners to finance executives—use Microsoft Excel to make sense of the information that occupies their respective worlds, and whether they realize it or not, in doing so, they’re taking on the role of programmer. In this episode, Senior Principal Research Manager Andy Gordon, who leads the Calc Intelligence team at Microsoft Research, and Senior Principal Researcher Simon Peyton Jones provide an inside […]

  • ML-Agents v2.0 release: Now supports training complex cooperative behaviors
    by Marwan Mattar (Machine Learning – Unity Technologies Blog) on May 5, 2021

    About one year ago, we announced the release of the ML-Agents v1.0 Unity package, which was verified for the 2020.2 Editor release. Today, we’re delighted to announce the v2.0 release of the ML-Agents Unity package, currently on track to be verified for the 2021.2 Editor release. Over this past year, we’ve made more than fifteen […]

  • Achieve 12x higher throughput and lowest latency for PyTorch Natural Language Processing applications out-of-the-box on AWS Inferentia
    by Fabio Nonato de Paula (AWS Machine Learning Blog) on May 4, 2021

    AWS customers like Snap, Alexa, and Autodesk have been using AWS Inferentia to achieve the highest performance and lowest cost on a wide variety of machine learning (ML) deployments. Natural language processing (NLP) models are growing in popularity for real-time and offline batched use cases. Our customers deploy these models in many applications like support

  • AI Gone Global: Why 20,000+ Developers from Emerging Markets Signed Up for GTC
    by Kate Kallot (The Official NVIDIA Blog) on May 4, 2021

    Major tech conferences are typically hosted in highly industrialized countries. But the appetite for AI and data science resources spans the globe — with an estimated 3 million developers in emerging markets. Our recent GPU Technology Conference — virtual, free to register, and featuring 24/7 content — for the first time featured a dedicated track on […]

  • Will Hurd Joins OpenAI’s Board of Directors
    by OpenAI (OpenAI) on May 4, 2021

    We’re delighted to announce that Congressman Will Hurd has joined our board of directors.

  • Conversations with data: Advancing the state of the art in language-driven data exploration
    by Alexis Hagen (Microsoft Research) on May 3, 2021

    One key aspiration of AI is to develop natural and effective task-oriented conversational systems. Task-oriented conversational systems use a natural language interface to collaborate with and support people in accomplishing specific goals and activities. They go beyond chitchat conversation. For example, as personal digital assistants, they ease the stress of trip planning or reduce the […]

  • Creating an end-to-end application for orchestrating custom deep learning HPO, training, and inference using AWS Step Functions
    by Mehdi Far (AWS Machine Learning Blog) on May 3, 2021

    Amazon SageMaker hyperparameter tuning provides a built-in solution for scalable training and hyperparameter optimization (HPO). However, for some applications (such as those with a preference for different HPO libraries or customized HPO features), we need custom machine learning (ML) solutions that allow retraining and HPO. This post offers a step-by-step guide to build a custom deep

  • EverParse: Hardening critical attack surfaces with formally proven message parsers
    by Alexis Hagen (Microsoft Research) on May 3, 2021

    EverParse is a framework for generating provably secure parsers and formatters used to improve the security of critical code bases at Microsoft. EverParse is developed as part of Project Everest, a collaboration between Microsoft Research labs in Redmond, Washington; India; and Cambridge, United Kingdom; the Microsoft Research-Inria Joint Centre; Inria; Carnegie Mellon University; and several […]

  • Introducing hierarchical deletion to easily clean up unused resources in Amazon Forecast
    by Alex Kim (AWS Machine Learning Blog) on April 30, 2021

    Amazon Forecast just launched the ability to hierarchically delete resources at a parent level without having to locate the child resources. You can stay focused on building value-adding forecasting systems and not worry about trying to manage individual resources that are created in your workflow. Forecast uses machine learning (ML) to generate more accurate demand
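
    A hedged sketch of what a hierarchical delete looks like from code, assuming boto3 exposes the new operation as delete_resource_tree and using a placeholder ARN:

    ```python
    import boto3

    forecast = boto3.client("forecast", region_name="us-east-1")

    # Deleting at the dataset-group level is intended to remove the child
    # resources (datasets, predictors, forecasts) created under it.
    forecast.delete_resource_tree(
        ResourceArn="arn:aws:forecast:us-east-1:123456789012:dataset-group/my_demand_dsg"
    )
    ```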

  • Around the World in AI Ways: Video Explores Machine Learning’s Global Impact
    by Rick Merritt (The Official NVIDIA Blog) on April 30, 2021

    You may have used AI in your smartphone or smart speaker, but have you seen how it comes alive in an artist’s brush stroke, how it animates artificial limbs or assists astronauts in Earth’s orbit? The latest video in the “I Am AI” series — the annual scene setter for the keynote at NVIDIA’s GTC […]

  • Update Complete: GFN Thursday Brings New Features, Games and More
    by GeForce NOW Community (The Official NVIDIA Blog) on April 29, 2021

    No Thursday is complete without GFN Thursday, our weekly celebration of the news, updates and great games GeForce NOW members can play — all streaming from the cloud across nearly all of your devices. This week’s exciting updates to the GeForce NOW app and experience include updated features, faster session loading and a bunch of […]

  • GFN Thursday: Rolling in the Deep (Silver) with Major ‘Metro Exodus’ and ‘Iron Harvest’ Updates
    by GeForce NOW Community (The Official NVIDIA Blog) on April 29, 2021

    Editor’s Note: During game onboarding for the Metro Exodus PC Enhanced Edition, we discovered that the update requires a version of Windows Server that isn’t currently supported on GeForce NOW. The required version is scheduled for release later this year, at which time we’ll begin upgrading our servers. We’ll provide additional information and timing as […]

  • When artists and machine intelligence come together
    by (AI) on April 29, 2021

    Throughout history, from photography to video to hypertext, artists have pushed the expressive limits of new technologies, and artificial intelligence is no exception. At I/O 2019, Google Research and Google Arts & Culture launched the Artists + Machine Intelligence Grants, providing a range of support and technical mentorship to six artists from around the globe following an open call for proposals. The inaugural grant program sought to expand the field of artists […]

  • Perceiving with Confidence: How AI Improves Radar Perception for Autonomous Vehicles
    by Neda Cvijetic (The Official NVIDIA Blog) on April 28, 2021

    Autonomous vehicles don’t just need to detect the moving traffic that surrounds them — they must also be able to tell what isn’t in motion.

  • Universal Scene Description Key to Shared Metaverse, GTC Panelists Say 
    by Brian Caulfield (The Official NVIDIA Blog) on April 27, 2021

    Artists and engineers, architects, and automakers are coming together around a new standard — born in the digital animation industry — that promises to weave all our virtual worlds together. That’s the conclusion of a group of panelists from a wide range of industries who gathered at NVIDIA GTC21 to talk about Pixar’s Universal Scene […]

  • Alexandria in Microsoft Viva Topics: from big data to big knowledge
    by Alexis Hagen (Microsoft Research) on April 26, 2021

    Project Alexandria is a research project within Microsoft Research Cambridge dedicated to discovering entities, or topics of information, and their associated properties from unstructured documents. This research lab has studied knowledge mining research for over a decade, using the probabilistic programming framework Infer.NET. Project Alexandria was established seven years ago to build on Infer.NET and […]

  • Making Movie Magic, NVIDIA Powers 13 Years of Oscar-Winning Visual Effects
    by Rick Champagne (The Official NVIDIA Blog) on April 22, 2021

    For the 13th year running, NVIDIA professional GPUs have powered the dazzling visuals and cinematics behind every Academy Award nominee for Best Visual Effects. The 93rd annual Academy Awards will take place on Sunday, April 25, with five VFX nominees in the running: The Midnight Sky, Tenet, Mulan, The One and Only Ivan, Love and […]

  • A whale of a tale about responsibility and AI
    by (AI) on April 22, 2021

    A couple of years ago, Google AI for Social Good’s Bioacoustics team created an ML model that helps the scientific community detect the presence of humpback whale sounds using acoustic recordings. This tool, developed in partnership with the National Oceanic and Atmospheric Administration, helps biologists study whale behaviors, patterns, population and potential human interactions. We realized other researchers could use this model for their work, too — it could help them […]

  • How we’re minimizing AI’s carbon footprint
    by (AI) on April 22, 2021

    When I first visited Google back in 2002, I was a computer science professor at UC Berkeley. My colleague John Hennessy and I were updating our textbook on computer architecture, and Larry Page — who rode a hot-rodded electric scooter at the time — agreed to show me how his then three-year-old company designed its computing for Search. I remember the setup was lean yet powerful: just 6,000 low-cost PC servers and 12,000 PC disks […]

  • ZeRO-Infinity and DeepSpeed: Unlocking unprecedented model scale for deep learning training
    by Alexis Hagen (Microsoft Research) on April 19, 2021

    Since the DeepSpeed optimization library was introduced last year, it has rolled out numerous novel optimizations for training large AI models—improving scale, speed, cost, and usability. As large models have quickly evolved over the last year, so too has DeepSpeed. Whether enabling researchers to create the 17-billion-parameter Microsoft Turing Natural Language Generation (Turing-NLG) with state-of-the-art […]

  • Supercharge your computer vision models with synthetic datasets built by Unity
    by Anthony Navarro (Machine Learning – Unity Technologies Blog) on April 19, 2021

    Is your limited dataset holding back the performance of your computer vision model? Using the power of the Unity Computer Vision Perception Package, Unity can unlock the potential of your computer vision model by generating custom datasets tailored to your specific requirements. Today, Unity Computer Vision Datasets are available to customers worldwide. Find out more […]

  • On-Demand QA Testing with Unity Automated QA
    by Dylan Scandinaro (Machine Learning – Unity Technologies Blog) on April 16, 2021

    Games are incredibly challenging to test. Game developers build games from components, yet the player interacts with a visual and dynamic world that is much more interesting and complex than a sum of components. Due to the complexity of modern games, even the most sophisticated QA teams have very limited options to scale the QA […]

  • Reinforcing program correctness with reinforcement learning
    by Alexis Hagen (Microsoft Research) on April 14, 2021

    Many of our online activities, from receiving and sending emails to searching for information to streaming movies, are driven behind the scenes by cloud-based distributed architectures. Writing concurrent software—programs with multiple logical threads of execution—is of paramount importance to scale to these growing computing needs. Unfortunately, writing correct concurrent software is challenging. Unit, integration, and […]

  • Boosting computer vision performance with synthetic data
    by James Fort (Machine Learning – Unity Technologies Blog) on April 9, 2021

    You need a lot of data to train a computer vision model to effectively interpret its surroundings, which can strain resources. Read this guest post to discover how Neural Pocket, an AI solution provider, used synthetic data to enable its computer vision models. Neural Pocket provides end-to-end AI smart city solutions for some of the […]

  • How fact checkers and Google.org are fighting misinformation
    by (AI) on March 31, 2021

    Misinformation can have dramatic consequences on people’s lives — from finding reliable information on everything from elections to vaccinations — and the pandemic has only exacerbated the problem as accurate information can save lives. To help fight the rise in misinformation, Full Fact, a nonprofit that provides tools and resources to fact checkers, turned to Google.org for help. Today, ahead of International Fact Checking Day, we’re sharing the impact of this […]

  • Redefining what a map can be with new information and AI
    by (AI) on March 30, 2021

    Sixteen years ago, many of us held a printout of directions in one hand and the steering wheel in the other to get around, without information about the traffic along the route or details about when your favorite restaurant was open. Since then, we’ve been pushing the boundaries of what a map can do, propelled by the latest machine learning. This year, we’re on track to bring over 100 AI-powered improvements to Google Maps so you can get the most accurate, up-to-date […]

  • GPT-3 Powers the Next Generation of Apps
    by OpenAI (OpenAI) on March 25, 2021

    Over 300 applications are delivering GPT-3–powered search, conversation, text completion, and other advanced AI features through our API.

  • Helping newsrooms experiment together with AI
    by (AI) on March 23, 2021

    In our JournalismAI report, journalists around the world told researchers they are eager to collaborate and explore the benefits of AI, especially as it applies to newsgathering, production and distribution. To facilitate their collaboration, the Google News Initiative and Polis – the journalism think tank at the London School of Economics and Political Science – are launching the JournalismAI Collab Challenges, an opportunity for three groups of five newsrooms from the […]

  • What drives Nithya Sambasivan’s fight for fairness
    by (AI) on March 22, 2021

    When Nithya Sambasivan was finishing her undergraduate degree in engineering, she felt slightly unsatisfied. “I wanted to know, ‘how will the technology I build impact people?’” she says. Luckily, she would soon discover the field of Human Computer Interaction (HCI) and pursue her graduate degrees. She completed her master’s and PhD in HCI focusing on technology design for low-income communities in India. “I worked with sex workers, slum communities, […]

  • Multimodal Neurons in Artificial Neural Networks
    by OpenAI (OpenAI) on March 4, 2021

    We’ve discovered neurons in CLIP that respond to the same concept whether presented literally, symbolically, or conceptually.

  • Marian Croak’s vision for responsible AI at Google
    by (AI) on February 18, 2021

    Dr. Marian Croak has spent decades working on groundbreaking technology, with over 200 patents in areas such as Voice over IP, which laid the foundation for the calls we all use to get things done and stay in touch during the pandemic. For the past six years she’s been a VP at Google working on everything from site reliability engineering to bringing public Wi-Fi to India’s railroads. Now, she’s taking on a new project: making sure Google develops artificial […]

  • Scaling Kubernetes to 7,500 Nodes
    by OpenAI (OpenAI) on January 25, 2021

    We've scaled Kubernetes clusters to 7,500 nodes, producing a scalable infrastructure for large models like GPT-3, CLIP, and DALL·E, but also for rapid small-scale iterative research such as Scaling Laws for Neural Language Models. Scaling a single Kubernetes cluster to this size is rarely done

  • Meet the researcher creating more access with language
    by (AI) on January 11, 2021

    When you’ve got your hands full and use your voice to ask your phone to play your favorite song, it can feel like magic. In reality, it’s a more complicated combination of engineering, design and natural language processing at work, making it easier for many of us to use our smartphones. But what happens when this voice technology isn’t available in our own language? This is something Google India researcher Shachi Dave considers as part of her day-to-day work. […]

  • DALL·E: Creating Images from Text
    by OpenAI (OpenAI) on January 5, 2021

    We’ve trained a neural network called DALL·E that creates images from text captions for a wide range of concepts expressible in natural language.

  • CLIP: Connecting Text and Images
    by OpenAI (OpenAI) on January 5, 2021

    We’re introducing a neural network called CLIP which efficiently learns visual concepts from natural language supervision.
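
    The announcement is accompanied by an open-source model and Python package; a minimal zero-shot classification sketch using that published clip package (the image path and candidate captions are placeholders):

    ```python
    import torch
    import clip
    from PIL import Image

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, preprocess = clip.load("ViT-B/32", device=device)

    image = preprocess(Image.open("photo.jpg")).unsqueeze(0).to(device)
    labels = ["a photo of a dog", "a photo of a cat", "a photo of a car"]
    text = clip.tokenize(labels).to(device)

    with torch.no_grad():
        # Similarity logits between the image and each caption, softmaxed into probabilities.
        logits_per_image, _ = model(image, text)
        probs = logits_per_image.softmax(dim=-1).cpu().numpy()

    print(dict(zip(labels, probs[0])))
    ```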

  • Just desserts: Baking with AI-made recipes
    by (AI) on January 5, 2021

    It’s winter, it’s the holidays and it’s quarantine-times: It’s the perfect recipe for doing a ton of baking. In fact, U.S. search interest in "baking" spiked in both November and December 2020. But being in the AI field, we decided to dive a little deeper into the trend and try to understand the science behind what makes cookies crunchy, cake spongy and bread fluffy — and we decided to do it with the help of machine learning. Plus, we used our ML model to come up […]

  • Organizational Update from OpenAI
    by OpenAI (OpenAI) on December 29, 2020

    It’s been a year of dramatic change and growth at OpenAI. In May, we introduced GPT-3—the most powerful language model to date—and soon afterward launched our first commercial product, an API to safely access artificial intelligence models using simple, natural-language prompts. We’re

  • Happy holidays from the Unity ML-Agents team!
    by Jeffrey Shih (Machine Learning – Unity Technologies Blog) on December 28, 2020

    On behalf of the Unity ML-Agents team, we want to wish everyone and their loved ones a happy holiday and new year! As we close out 2020, we wanted to take a moment to highlight a few of our favorite community projects in 2020, recap our progress since our v1.0 release (Release 1) in April […]

  • MuZero: Mastering Go, chess, shogi and Atari without rules
    on December 23, 2020

    Planning winning strategies in unknown environments is a step forward in the pursuit of general-purpose algorithms.

  • AI helps protect Australian wildlife in fire-affected areas
    by (AI) on December 15, 2020

    Editor’s note: Today's guest post comes from Darren Grover, Head of Healthy Land and Seascapes at the World Wide Fund For Nature Australia. Over the next six months, more than 600 sensor cameras will be deployed in bushfire-affected areas across Australia, monitoring and evaluating the surviving wildlife populations. This nationwide effort is part of An Eye on Recovery, a large-scale collaborative camera sensor project, run by the World Wide Fund for Nature (WWF) and […]

  • Automate your playtesting: Create Virtual Players for Game Simulation
    by Dylan Scandinaro (Machine Learning – Unity Technologies Blog) on December 11, 2020

    It’s easy to automate playtesting by creating a Virtual Player (a game-playing agent), then using Game Simulation to run automated playtests at scale. Read on to discover three case studies describing how iLLOGIKA, Furyion, and Ritz Deli created Virtual Players – offloading nearly 40,000 hours (~4.5 years) of automated playtesting to Game Simulation. Games are […]

  • Researchers can use qsim to explore quantum algorithms
    by (AI) on December 7, 2020

    A year ago, Google’s Quantum AI team achieved a beyond-classical computation by using a quantum computer to outperform the world’s fastest classical computer. With this, we entered a new era of quantum computing. We still have a long journey ahead of us to find practical applications, and we know we can’t get there alone. So today we’re launching qsim, a new open source quantum simulator that will help researchers develop quantum algorithms. The importance of […]

  • When newsrooms collaborate with AI
    by (AI) on December 7, 2020

    Two years ago, the Google News Initiative partnered with the London School of Economics and Political Science to launch JournalismAI, a global effort to foster media literacy in newsrooms through research, training and experimentation.  Since then, more than 62 thousand journalists have taken Introduction to Machine Learning, an online course provided in 17 languages in partnership with Belgian broadcaster VRT. More than 4,000 people have downloaded the JournalismAI […]

  • Using JAX to accelerate our research
    on December 4, 2020

    An introduction to our JAX ecosystem and why we find it useful for our AI research.

  • AlphaFold: a solution to a 50-year-old grand challenge in biology
    on November 30, 2020

    In a major scientific advance, AlphaFold is recognised as a solution to the protein folding problem.

  • Real-time style transfer in Unity using deep neural networks
    by Thomas Deliot (Machine Learning – Unity Technologies Blog) on November 25, 2020

    Deep Learning is now powering numerous AI technologies in daily life, and convolutional neural networks (CNNs) can apply complex treatments to images at high speeds. At Unity, we aim to propose seamless integration of CNN inference in the 3D rendering pipeline. Unity Labs, therefore, works on improving state-of-the-art research and developing an efficient neural inference […]

  • How Eidos-Montréal created Grid Sensors to improve observations for training agents
    by Jeffrey Shih (Machine Learning – Unity Technologies Blog) on November 20, 2020

    Within Eidos Labs, several projects use machine learning. The Automated Game Testing project tackles the problem of testing the functionality of expansive AAA games by modeling player behavior with agents that have learned behavior using reinforcement learning (RL). In this blog post, we’ll describe how the team at Eidos Labs created the Grid Sensor within […]

  • Robotics simulation in Unity is as easy as 1, 2, 3!
    by Cameron Greene (Machine Learning – Unity Technologies Blog) on November 19, 2020

    Robot development workflows rely on simulation for testing and training, and we want to show you how roboticists can use Unity for robotics simulation. In this first blog post of a new series, we describe a common robotics development workflow. Plus, we introduce a new set of tools that make robotics simulation in Unity faster, […]

  • How Metric Validation can help you finetune your game
    by Willis Kennedy (Machine Learning – Unity Technologies Blog) on November 13, 2020

    Over the past year, Unity Game Simulation has enabled developers to balance their games during development by running multiple playthroughs in parallel in the cloud. Today we are excited to share the next step in automating aspects of the balancing workflow by releasing Metric Validation, a precursor to our upcoming Optimization feature. In this blog […]

  • 2020 AI@Unity interns shoutout
    by Andrew Cohen (Machine Learning – Unity Technologies Blog) on November 11, 2020

    Each summer, interns join AI@Unity to develop highly impactful technology that forwards our mission to empower Unity developers with Artificial Intelligence and Machine Learning tools and services. This past summer was no different, and the AI@Unity group was delighted to have 24 fantastic interns. This post will highlight the seven research and engineering interns from […]

  • Breaking down global barriers to access
    on November 5, 2020

    We're expanding our scholars programme to support more countries currently underrepresented in AI.

  • FermiNet: Quantum Physics and Chemistry from First Principles
    on October 19, 2020

    We've developed a new neural network architecture, the Fermionic Neural Network or FermiNet, which is well-suited to modeling the quantum state of large collections of electrons, the fundamental building blocks of chemical bonds.

  • Fast reinforcement learning through the composition of behaviours
    on October 12, 2020

    The combination of reinforcement learning and deep learning has led to impressive results, such as agents that can learn how to play boardgames, the full spectrum of Atari games, as well as more modern, difficult video games like Dota and StarCraft II.

  • OpenAI Licenses GPT-3 Technology to Microsoft
    by OpenAI (OpenAI) on September 22, 2020

    OpenAI released its first commercial product back in June: an API for developers to access advanced technologies for building new applications and services. The API features a powerful general purpose language model, GPT-3, and has received tens of thousands of applications to date. In addition to offering GPT-3 and future

  • Training a performant object detection ML model on synthetic data using Unity computer vision tools
    by You-Cyuan Jhang (Machine Learning – Unity Technologies Blog) on September 17, 2020

    Supervised machine learning (ML) has revolutionized artificial intelligence and has led to the creation of numerous innovative products. However, with supervised machine learning, there is always a need for larger and more complex datasets, and collecting these datasets is costly. How can you be sure of the label quality? How do you ensure that the […]

  • Learning to Summarize with Human Feedback
    by OpenAI (OpenAI) on September 4, 2020

    We've applied reinforcement learning from human feedback to train language models that are better at summarization. Our models generate summaries that are better than summaries from 10x larger models trained only with supervised learning. Even though we train our models on the Reddit TL;DR dataset, the same
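
    A central ingredient of this approach is a reward model trained on pairs of summaries ranked by people. The sketch below is a toy PyTorch illustration of that pairwise preference loss, with a stand-in scoring network rather than the paper’s language-model-based reward model:

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class RewardModel(nn.Module):
        """Toy stand-in: scores a fixed-size summary embedding with one scalar."""
        def __init__(self, dim: int = 128):
            super().__init__()
            self.score = nn.Linear(dim, 1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.score(x).squeeze(-1)

    reward = RewardModel()
    optimizer = torch.optim.Adam(reward.parameters(), lr=1e-4)

    # Fake embeddings standing in for the human-preferred and rejected summaries.
    preferred = torch.randn(32, 128)
    rejected = torch.randn(32, 128)

    # Pairwise preference loss: push the preferred summary's score above the rejected one's.
    loss = -F.logsigmoid(reward(preferred) - reward(rejected)).mean()
    loss.backward()
    optimizer.step()
    print(float(loss))
    ```

    The trained reward model then provides the signal that the summarization policy is optimized against with reinforcement learning.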

  • Traffic prediction with advanced Graph Neural Networks
    on September 3, 2020

    Working with our partners at Google Maps, we used advanced machine learning techniques, including Graph Neural Networks, to improve the accuracy of real-time ETAs by up to 50%.

  • The power of Unity in AI
    by Cameron Greene (Machine Learning – Unity Technologies Blog) on July 24, 2020

    Since 2018, Cross Compass has integrated Unity into the pipeline of several of its consulting services for the manufacturing field to train and validate AI algorithms safely before deployment. Read on to learn how this AI company came to use gaming technology to add value to such a mature industry. Cross Compass is a leading […]

  • OpenAI Scholars 2020: Final Projects
    by OpenAI (OpenAI) on July 9, 2020

    Our third class of OpenAI Scholars presented their final projects at virtual Demo Day, showcasing their research results from the past five months.

  • Fiber: Distributed Computing for AI Made Simple
    by Jiale Zhi (Machine Learning – Uber Engineering Blog) on June 30, 2020

    Over the past several years, increasing processing power of computing machines has led to an increase in machine learning advances. More and more, algorithms exploit parallelism and rely on distributed training to process an enormous amount of […]

  • Applying for technical roles
    on June 23, 2020

    We answer the Women in Machine Learning community's questions about applying for a job in industry.

  • Image GPT
    by OpenAI (OpenAI) on June 17, 2020

    We find that, just as a large transformer model trained on language can generate coherent text, the same exact model trained on pixel sequences can generate coherent image completions and samples.

  • Scaling Kubernetes Jobs for Unity Simulation
    by Marc Beitchman (Machine Learning – Unity Technologies Blog) on June 17, 2020

    Unity Simulation enables product developers, researchers, and engineers to smoothly and efficiently run thousands of instances of parameterized Unity builds in batch in the cloud. Unity Simulation allows you to parameterize a Unity project in ways that will change from run to run. You can also specify simulation output data necessary for your end application, […]

  • Profiles in Coding: Diana Yanakiev, Uber ATG, Pittsburgh
    by Bea Schuster (Machine Learning – Uber Engineering Blog) on June 16, 2020

    Self-driving cars have long been considered the future of transportation, but they’re becoming more present every day. Uber ATG (Advanced Technologies Group) is at the forefront of this technology, helping bring safe, reliable self-driving vehicles to the streets. Of course, […]

  • OpenAI API
    by Greg Brockman (OpenAI) on June 11, 2020

    We’re releasing an API for accessing new AI models developed by OpenAI. Unlike most AI systems which are designed for one use-case, the API today provides a general-purpose “text in, text out” interface, allowing users to try it on virtually any English language task. You can
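
    A minimal sketch of that “text in, text out” interface using the Python client as it existed at launch (the engine name, prompt, and API key are placeholders):

    ```python
    import openai

    openai.api_key = "sk-..."  # placeholder; supply your own key

    # One general-purpose call: provide text, get text back.
    response = openai.Completion.create(
        engine="davinci",  # illustrative engine name
        prompt="Translate to French: Where is the train station?",
        max_tokens=32,
        temperature=0,
    )

    print(response["choices"][0]["text"].strip())
    ```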

  • Use Unity’s computer vision tools to generate and analyze synthetic data at scale to train your ML models
    by Adam Crespi (Machine Learning – Unity Technologies Blog) on June 10, 2020

    Synthetic data alleviates the challenge of acquiring labeled data needed to train machine learning models. In this post, the second in our blog series on synthetic data, we will introduce tools from Unity to generate and analyze synthetic datasets with an illustrative example of object detection. In our first blog post, we discussed the challenges […]

  • Procgen and MineRL Competitions
    by OpenAI (OpenAI) on June 9, 2020

    We’re excited to announce that OpenAI is co-organizing two NeurIPS 2020 competitions with AIcrowd, Carnegie Mellon University, and DeepMind, using Procgen Benchmark and MineRL. We rely heavily on these environments internally for research on reinforcement learning, and we look forward to seeing the progress the community makes in

  • Introducing Neuropod, Uber ATG’s Open Source Deep Learning Inference Engine
    by Vivek Panyam (Machine Learning – Uber Engineering Blog) on June 8, 2020

    At Uber Advanced Technologies Group (ATG), we leverage deep learning to provide safe and reliable self-driving technology. Using deep learning, we can build and train models to handle tasks such as processing sensor input, identifying objects, and predicting where […]

  • Inside Uber ATG’s Data Mining Operation: Identifying Real Road Scenarios at Scale for Machine Learning
    by Steffon Davis (Machine Learning – Uber Engineering Blog) on June 2, 2020

    How did the pedestrian cross the road? Contrary to popular belief, sometimes the answer isn’t as simple as “to get to the other side.” To bring safe, reliable self-driving vehicles (SDVs) to the streets at Uber Advanced Technologies Group (ATG) […]

  • Meta-Graph: Few-Shot Link Prediction Using Meta-Learning
    by Ankit Jain (Machine Learning – Uber Engineering Blog) on May 29, 2020

    This article is based on the paper “Meta-Graph: Few Shot Link Prediction via Meta Learning” by Joey Bose, Ankit Jain, Piero Molino, and William L. Hamilton Many real-world data sets are structured as graphs, and as such, machine … The post Meta-Graph: Few-Shot Link Prediction Using Meta-Learning appeared first on Uber Engineering Blog.

  • Using AI to predict retinal disease progression
    on May 18, 2020

    Vision loss among the elderly is a major healthcare issue: about one in three people have some vision-reducing disease by the age of 65. Age-related macular degeneration (AMD) is the most common cause of blindness in the developed world. In Europe, approximately 25% of those 60 and older have AMD. The dry form is relatively common among people over 65, and usually causes only mild sight loss. However, about 15% of patients with dry AMD go on to develop a more serious form of […]

  • AI and Efficiency
    by Danny Hernandez (OpenAI) on May 5, 2020

    We’re releasing an analysis showing that since 2012 the amount of compute needed to train a neural net to the same performance on ImageNet classification has been decreasing by a factor of 2 every 16 months. Compared to 2012, it now takes 44 times less compute to train
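
    The two headline figures are consistent with each other: halving every 16 months, compounded over the roughly seven years the analysis covers, gives a factor in the mid-40s. A quick back-of-the-envelope check:

    ```python
    # Back-of-the-envelope check: compute falls by 2x every 16 months.
    # Roughly 7 years (84-88 months) separate 2012 from the period analysed.
    for months in (84, 88):
        reduction = 2 ** (months / 16)
        print(f"{months} months -> ~{reduction:.0f}x less compute")
    # 84 months -> ~38x, 88 months -> ~45x, in line with the reported 44x.
    ```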

  • Specification gaming: the flip side of AI ingenuity
    on April 21, 2020

    Specification gaming is a behaviour that satisfies the literal specification of an objective without achieving the intended outcome. We have all had experiences with specification gaming, even if not by this name. Readers may have heard the myth of King Midas and the golden touch, in which the king asks that anything he touches be turned to gold - but soon finds that even food and drink turn to metal in his hands. In the real world, when rewarded for doing well on a homework […]

  • Towards understanding glasses with graph neural networks
    on April 6, 2020

    Under a microscope, a pane of window glass doesn't look like a collection of orderly molecules, as a crystal would, but rather a jumble with no discernable structure. Glass is made by starting with a glowing mixture of high-temperature melted sand and minerals. Once cooled, its viscosity (a measure of the friction in the fluid) increases a trillion-fold, and it becomes a solid, resisting tension from stretching or pulling. Yet the molecules in the glass remain in a seemingly […]

  • Agent57: Outperforming the human Atari benchmark
    on March 31, 2020

    The Atari57 suite of games is a long-standing benchmark to gauge agent performance across a wide range of tasks. We've developed Agent57, the first deep reinforcement learning agent to obtain a score that is above the human baseline on all 57 Atari 2600 games. Agent57 combines an algorithm for efficient exploration with a meta-controller that adapts the exploration and long vs. short-term behaviour of the agent.

  • Under the Hood of Uber ATG’s Machine Learning Infrastructure and Versioning Control Platform for Self-Driving Vehicles
    by Yu Guo (Machine Learning – Uber Engineering Blog) on March 4, 2020

    As Uber experienced exponential growth over the last few years, now supporting 14 million trips each day, our engineers proved they could build for scale. That value extends to other areas, including Uber ATG (Advanced Technologies Group) and its quest … The post Under the Hood of Uber ATG’s Machine Learning Infrastructure and Versioning Control Platform for Self-Driving Vehicles appeared first on Uber Engineering Blog.

  • Building a Backtesting Service to Measure Model Performance at Uber-scale
    by Sam Xiao (Machine Learning – Uber Engineering Blog) on February 13, 2020

    With operations in over 700 cities worldwide and gross bookings of over $16 billion in Q3 2019 alone, Uber leverages forecast models to ensure accurate financial planning and budget management. These models, derived from data science practices and platformed for … The post Building a Backtesting Service to Measure Model Performance at Uber-scale appeared first on Uber Engineering Blog.

  • A new model and dataset for long-range memory
    on February 10, 2020

    This blog introduces a new long-range memory model, the Compressive Transformer, alongside a new benchmark for book-level language modelling, PG19. We provide the conceptual tools needed to understand this new research in the context of recent developments in memory models and language modelling.

  • Women in Data Science at Uber: Moving the World With Data in 2020—and Beyond
    by Emily Bailey (Machine Learning – Uber Engineering Blog) on January 28, 2020

    Uber is a company built on data science. We leverage map data to get users from point A to point B; speech and text data to communicate between riders and drivers; restaurant and dish data to recommend food … The post Women in Data Science at Uber: Moving the World With Data in 2020—and Beyond appeared first on Uber Engineering Blog.

  • Dopamine and temporal difference learning: A fruitful relationship between neuroscience and AI
    on January 15, 2020

    A recent development in computer science may provide a deep, parsimonious explanation for several previously unexplained features of reward learning in the brain.

  • AlphaFold: Using AI for scientific discovery
    on January 15, 2020

    Our Nature paper describes AlphaFold, a system that generates 3D models of proteins that are far more accurate than any that have come before.

  • Open Sourcing Manifold, a Visual Debugging Tool for Machine Learning
    by Lezhi Li (Machine Learning – Uber Engineering Blog) on January 7, 2020

    In January 2019, Uber introduced Manifold, a model-agnostic visual debugging tool for machine learning that we use to identify issues in our ML models. To give other ML practitioners the benefits of this tool, today we are excited to … The post Open Sourcing Manifold, a Visual Debugging Tool for Machine Learning appeared first on Uber Engineering Blog.

  • Uber Visualization Highlights: Displaying City Street Speed Clusters with SpeedsUp
    by Bryant Luong (Machine Learning – Uber Engineering Blog) on January 2, 2020

    Uber’s Data Visualization team builds software that enables us to better understand how cities move through dynamic visualizations. The Uber Engineering Blog periodically highlights visualizations that showcase how these technologies can turn aggregated data into actionable insights.   For SpeedsUp, … The post Uber Visualization Highlights: Displaying City Street Speed Clusters with SpeedsUp appeared first on Uber Engineering Blog.

  • Uber AI in 2019: Advancing Mobility with Artificial Intelligence
    by Zoubin Ghahramani (Machine Learning – Uber Engineering Blog) on December 18, 2019

    Artificial intelligence powers many of the technologies and services underpinning Uber’s platform, allowing engineering and data science teams to make informed decisions that help improve user experiences for products across our lines of business.  At the forefront of this effort … The post Uber AI in 2019: Advancing Mobility with Artificial Intelligence appeared first on Uber Engineering Blog.

  • Using WaveNet technology to reunite speech-impaired users with their original voices
    on December 18, 2019

    We demonstrate an early proof of concept of how text-to-speech technologies can synthesise a high-quality, natural sounding voice using minimal recorded speech data.

  • Learning human objectives by evaluating hypothetical behaviours
    on December 13, 2019

    We present a new method for training reinforcement learning agents from human feedback in the presence of unknown unsafe states.

  • Productionizing Distributed XGBoost to Train Deep Tree Models with Large Data Sets at Uber
    by Joseph Wang (Machine Learning – Uber Engineering Blog) on December 10, 2019

    Michelangelo, Uber’s machine learning (ML) platform, powers machine learning model training across various use cases at Uber, such as forecasting rider demand, fraud detection, food discovery and recommendation for Uber Eats, and improving the accuracy of … The post Productionizing Distributed XGBoost to Train Deep Tree Models with Large Data Sets at Uber appeared first on Uber Engineering Blog.

  • From unlikely start-up to major scientific organisation: Entering our tenth year at DeepMind
    on December 5, 2019

    We've come a long way in building the organisation we need to achieve our long-term mission.

  • Announcing the 2020 Uber AI Residency
    by Ersin Yumer (Machine Learning – Uber Engineering Blog) on November 26, 2019

    Connecting the digital and physical worlds safely and reliably on the Uber platform presents exciting technological challenges and opportunities. For Uber, artificial intelligence (AI) is essential to developing systems that are capable of optimized, automated decision making at scale. AI … The post Announcing the 2020 Uber AI Residency appeared first on Uber Engineering Blog.

  • Strengthening the AI community
    on November 21, 2019

    AI requires people with different experiences, knowledge and backgrounds, which is why we started the DeepMind Scholarship programme and support universities and the wider ecosystem.

  • Advanced machine learning helps Play Store users discover personalised apps
    on November 18, 2019

    In collaboration with Google Play, our team that leads on collaborations with Google has driven significant improvements in the Play Store's discovery systems, helping to deliver a more personalised and intuitive Play Store experience for users.

  • AlphaStar: Grandmaster level in StarCraft II using multi-agent reinforcement learning
    on October 30, 2019

    AlphaStar is the first AI to reach the top league of a widely popular esport without any game restrictions.

  • Evolving Michelangelo Model Representation for Flexibility at Scale
    by Anne Holler (Machine Learning – Uber Engineering Blog) on October 16, 2019

    Michelangelo, Uber’s machine learning (ML) platform, supports the training and serving of thousands of models in production across the company. Designed to cover the end-to-end ML workflow, the system currently supports classical machine learning, time series forecasting, and deep … The post Evolving Michelangelo Model Representation for Flexibility at Scale appeared first on Uber Engineering Blog.

  • Causal Bayesian Networks: A flexible tool to enable fairer machine learning
    on October 3, 2019

    Decisions based on machine learning (ML) are potentially advantageous over human decisions, but the data used to train ML models often contains human and societal biases that can lead to harmful decisions.

  • DeepMind’s health team joins Google Health
    on September 18, 2019

    Here's what the future looks like for the team.

  • Science at Uber: Improving Transportation with Artificial Intelligence
    by Wayne Cunningham (Machine Learning – Uber Engineering Blog) on September 17, 2019

    At Uber, we take advanced research work and use it to solve real world problems. In our  Science at Uber video series, Uber employees talk about how we apply data science, artificial intelligence, machine learning, and other innovative technologies … The post Science at Uber: Improving Transportation with Artificial Intelligence appeared first on Uber Engineering Blog.

  • Episode 8: Demis Hassabis - The interview
    on September 17, 2019

    In this special extended episode, Hannah meets Demis Hassabis, the CEO and co-founder of DeepMind.

  • Three Approaches to Scaling Machine Learning with Uber Seattle Engineering
    by Bea Schuster (Machine Learning – Uber Engineering Blog) on September 11, 2019

    Uber’s services require real-world coordination between a wide range of customers, including driver-partners, riders, restaurants, and eaters. Accurately forecasting things like rider demand and ETAs enables this coordination, which makes our services work as seamlessly as possible.  In an effort … The post Three Approaches to Scaling Machine Learning with Uber Seattle Engineering appeared first on Uber Engineering Blog.

  • Science at Uber: Powering Machine Learning at Uber
    by Wayne Cunningham (Machine Learning – Uber Engineering Blog) on September 10, 2019

    At Uber, we take advanced research work and use it to solve real world problems. In our  Science at Uber video series, Uber employees talk about how we apply data science, artificial intelligence, machine learning, and other innovative technologies … The post Science at Uber: Powering Machine Learning at Uber appeared first on Uber Engineering Blog.

  • Episode 7: Towards the future
    on September 10, 2019

    AI researchers around the world are trying to create a general purpose learning system that can learn to solve a broad range of problems without being taught how. Hannah explores the journey to get there.

  • Replay in biological and artificial neural networks
    on September 6, 2019

    Our waking and sleeping lives are punctuated by fragments of recalled memories: a sudden connection in the shower between seemingly disparate thoughts, or an ill-fated choice decades ago that haunts us as we struggle to fall asleep.

  • Episode 6: AI for everyone
    on September 3, 2019

    Hannah investigates the more human side of the technology, some ethical issues around how it is developed and used, and the efforts to create a future of AI that works for everyone.

  • Episode 5: Out of the lab
    on August 27, 2019

    Hannah Fry meets the scientists building systems that could be used to save the sight of thousands, help us solve one of the most fundamental problems in biology, and reduce energy consumption in an effort to combat climate change.

  • Advancing AI: A Conversation with Jeff Clune, Senior Research Manager at Uber
    by Molly Vorwerck (Machine Learning – Uber Engineering Blog) on August 21, 2019

    The past few months have been a whirlwind for Jeff Clune, Senior Research Manager at Uber and a founding member of Uber AI Labs. In June 2019, research by him and his collaborators on POET, an algorithm … The post Advancing AI: A Conversation with Jeff Clune, Senior Research Manager at Uber appeared first on Uber Engineering Blog.

  • Episode 4: AI, Robot
    on August 20, 2019

    Forget what sci-fi has told you about superintelligent robots that are uncannily human-like; the reality is more prosaic. Inside DeepMind's robotics laboratory, Hannah explores what researchers call embodied AI.

  • Episode 3: Life is like a game
    on August 19, 2019

    Video games have become a favourite tool for AI researchers to test the abilities of their systems. Why?

  • Episode 2: Go to Zero
    on August 18, 2019

    The story of AlphaGo, the first computer program to defeat a professional human player at the game of Go, a milestone considered a decade ahead of its time.

  • Episode 1: AI and neuroscience - The virtuous circle
    on August 17, 2019

    What can the human brain teach us about AI? And what can AI teach us about our own intelligence?

  • Welcome to the DeepMind podcast
    on August 16, 2019

    This eight-part series, hosted by mathematician and broadcaster Hannah Fry, aims to give listeners an inside look at the fascinating world of AI research and explores some of the questions and challenges the whole field is wrestling with today.

  • Using machine learning to accelerate ecological research
    on August 8, 2019

    The Serengeti is one of the last remaining sites in the world that hosts an intact community of large mammals. These animals roam over vast swaths of land, some migrating thousands of miles across multiple countries following seasonal rainfall. As human encroachment around the park becomes more intense, these species are forced to alter their behaviours in order to survive. Increasing agriculture, poaching, and climate abnormalities contribute to changes in animal behaviours […]

  • Using AI to give doctors a 48-hour head start on life-threatening illness
    on July 31, 2019

    Artificial intelligence can now predict one of the leading causes of avoidable patient harm up to two days before it happens, as demonstrated by our latest research published in Nature.

  • Introducing EvoGrad: A Lightweight Library for Gradient-Based Evolution
    by Alex Gajewski (Machine Learning – Uber Engineering Blog) on July 22, 2019

    Tools that enable fast and flexible experimentation democratize and accelerate machine learning research. Take for example the development of libraries for automatic differentiation, such as Theano, Caffe, TensorFlow, and PyTorch: these libraries have been instrumental in … The post Introducing EvoGrad: A Lightweight Library for Gradient-Based Evolution appeared first on Uber Engineering Blog.
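
    As a reference point for what "gradient-based evolution" looks like in practice, the sketch below implements a generic evolution-strategies gradient estimator in NumPy. It is illustrative only and is not the EvoGrad API.

    ```python
    # Generic evolution-strategies (ES) gradient estimate in NumPy.
    # Illustrates gradient-based evolution; this is not the EvoGrad API.
    import numpy as np

    rng = np.random.default_rng(0)

    def es_gradient(fitness, theta, sigma=0.1, population=200):
        """Monte Carlo estimate of d/d theta of E[fitness(theta + sigma * eps)]."""
        eps = rng.standard_normal((population // 2, theta.size))
        eps = np.concatenate([eps, -eps])      # antithetic pairs reduce variance
        scores = np.array([fitness(theta + sigma * e) for e in eps])
        scores -= scores.mean()                # mean baseline, more variance reduction
        return (eps * scores[:, None]).mean(axis=0) / sigma

    # Toy usage: climb a simple quadratic fitness surface.
    target = np.array([3.0, -2.0])
    fitness = lambda th: -np.sum((th - target) ** 2)

    theta = np.zeros(2)
    for _ in range(200):
        theta += 0.05 * es_gradient(fitness, theta)   # plain gradient ascent
    print(theta)   # ends up close to [3, -2]
    ```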

  • Unsupervised learning: The curious pupil
    on June 25, 2019

    One in a series of posts explaining the theories underpinning our research. Over the last decade, machine learning has made unprecedented progress in areas as diverse as image recognition, self-driving cars and playing complex games like Go. These successes have been largely realised by training deep neural networks with one of two learning paradigms: supervised learning and reinforcement learning. Both paradigms require training signals to be designed by a human and passed to […]

  • Gaining Insights in a Simulated Marketplace with Machine Learning at Uber
    by Haoyang Chen (Machine Learning – Uber Engineering Blog) on June 24, 2019

    At Uber, we use marketplace algorithms to connect drivers and riders. Before the algorithms roll out globally, Uber fully tests and evaluates them to create an optimal user experience that maps to our core marketplace principles. To make product … The post Gaining Insights in a Simulated Marketplace with Machine Learning at Uber appeared first on Uber Engineering Blog.

  • No Coding Required: Training Models with Ludwig, Uber’s Open Source Deep Learning Toolbox
    by Molly Vorwerck (Machine Learning – Uber Engineering Blog) on June 14, 2019

    Machine learning models perform a diversity of tasks at Uber, from improving our maps to streamlining chat communications and even preventing fraud. In addition to serving a variety of use cases, it is important that we make machine learning … The post No Coding Required: Training Models with Ludwig, Uber’s Open Source Deep Learning Toolbox appeared first on Uber Engineering Blog.

  • Capture the Flag: the emergence of complex cooperative agents
    on May 30, 2019

    Mastering the strategy, tactical understanding, and team play involved in multiplayer video games represents a critical challenge for AI research. Now, through new developments in reinforcement learning, our agents have achieved human-level performance in Quake III Arena Capture the Flag, a complex multi-agent environment and one of the canonical 3D first-person multiplayer games. These agents demonstrate the ability to team up with both artificial agents and human players.

  • Improving Uber’s Mapping Accuracy with CatchME
    by Yuehai Xu (Machine Learning – Uber Engineering Blog) on April 25, 2019

    Reliable transportation requires a robust map stack that provides services like routing,  navigation instructions, and ETA calculation. Errors in map data can significantly impact services, leading to a suboptimal user experience. Uber engineers use various sources of feedback to identify … The post Improving Uber’s Mapping Accuracy with CatchME appeared first on Uber Engineering Blog.

  • Identifying and eliminating bugs in learned predictive models
    on March 28, 2019

    One in a series of posts explaining the theories underpinning our research. Bugs and software have gone hand in hand since the beginning of computer programming. Over time, software developers have established a set of best practices for testing and debugging before deployment, but these practices are not suited for modern deep learning systems. Today, the prevailing practice in machine learning is to train a system on a training data set, and then test it on another set. […]

  • Accessible Machine Learning through Data Workflow Management
    by Jianyong Zhang (Machine Learning – Uber Engineering Blog) on March 18, 2019

    Machine learning (ML) pervades many aspects of Uber's business. From responding to customer support tickets and optimizing queries to forecasting demand, ML provides critical insights for many of our teams. Our teams encountered many different challenges while incorporating … The post Accessible Machine Learning through Data Workflow Management appeared first on Uber Engineering Blog.

  • Data Science at Scale: A Conversation with Uber’s Fran Bell
    by Molly Vorwerck (Machine Learning – Uber Engineering Blog) on March 13, 2019

    Fran Bell has always been a scientist; theorizing, modeling and testing how the world works. An ever-curious child, she was fascinated by the natural world, poring over biology and chemistry books, but was never satisfied with just knowing; she … The post Data Science at Scale: A Conversation with Uber’s Fran Bell appeared first on Uber Engineering Blog.

  • TF-Replicator: Distributed Machine Learning for Researchers
    on March 7, 2019

    At DeepMind, the Research Platform Team builds infrastructure to empower and accelerate our AI research. Today, we are excited to share how we developed TF-Replicator, a software library that helps researchers deploy their TensorFlow models on GPUs and Cloud TPUs with minimal effort and no previous experience with distributed systems. TF-Replicator's programming model has now been open sourced as part of TensorFlow's tf.distribute.Strategy. This blog post gives an overview of […]
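
    For readers who want to try the open-sourced path mentioned above, here is a minimal sketch of the tf.distribute.Strategy API with Keras in TensorFlow 2; it is a generic example, not TF-Replicator's original interface.

    ```python
    # Minimal sketch of tf.distribute.Strategy with Keras (TensorFlow 2).
    # Generic example, not TF-Replicator's original interface.
    import numpy as np
    import tensorflow as tf

    strategy = tf.distribute.MirroredStrategy()   # data-parallel across local GPUs
    print("Replicas in sync:", strategy.num_replicas_in_sync)

    with strategy.scope():                        # variables created here are mirrored
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
            tf.keras.layers.Dense(1),
        ])
        model.compile(optimizer="adam", loss="mse")

    # Toy data; each replica processes a shard of every batch.
    x = np.random.randn(1024, 32).astype("float32")
    y = np.random.randn(1024, 1).astype("float32")
    model.fit(x, y, batch_size=128, epochs=2)
    ```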

  • Machine learning can boost the value of wind energy
    on February 26, 2019

    Carbon-free technologies like renewable energy help combat climate change, but many of them have not reached their full potential. Consider wind power: over the past decade, wind farms have become an important source of carbon-free electricity as the cost of turbines has plummeted and adoption has surged. However, the variable nature of wind itself makes it an unpredictable energy source, less useful than one that can reliably deliver power at a set time. In search of a […]

  • Uber Open Source: Catching Up with Fritz Obermeyer and Noah Goodman from the Pyro Team
    by Molly Vorwerck (Machine Learning – Uber Engineering Blog) on February 21, 2019

    Over the past several years, artificial intelligence (AI) has become an integral component of many enterprise tech stacks, facilitating faster, more efficient solutions for everything from self-driving vehicles to automated messaging platforms. On the AI spectrum, deep probabilistic programming, a … The post Uber Open Source: Catching Up with Fritz Obermeyer and Noah Goodman from the Pyro Team appeared first on Uber Engineering Blog.

  • Introducing Ludwig, a Code-Free Deep Learning Toolbox
    by Piero Molino (Machine Learning – Uber Engineering Blog) on February 11, 2019

    Over the last decade, deep learning models have proven highly effective at performing a wide variety of machine learning tasks in vision, speech, and language. At Uber we are using these models for a variety of tasks, including customer support… The post Introducing Ludwig, a Code-Free Deep Learning Toolbox appeared first on Uber Engineering Blog.

  • AlphaStar: Mastering the Real-Time Strategy Game StarCraft II
    on January 24, 2019

    Games have been used for decades as an important way to test and evaluate the performance of artificial intelligence systems. As capabilities have increased, the research community has sought games with increasing complexity that capture different elements of intelligence required to solve scientific and real-world problems. In recent years, StarCraft, considered to be one of the most challenging Real-Time Strategy (RTS) games and one of the longest-played esports of all […]

  • Manifold: A Model-Agnostic Visual Debugging Tool for Machine Learning at Uber
    by Lezhi Li (Machine Learning – Uber Engineering Blog) on January 14, 2019

    Machine learning (ML) is widely used across the Uber platform to support intelligent decision making and forecasting for features such as ETA prediction and fraud detection. For optimal results, we invest a lot of resources in developing accurate predictive … The post Manifold: A Model-Agnostic Visual Debugging Tool for Machine Learning at Uber appeared first on Uber Engineering Blog.

  • POET: Endlessly Generating Increasingly Complex and Diverse Learning Environments and their Solutions through the Paired Open-Ended Trailblazer
    by Rui Wang (Machine Learning – Uber Engineering Blog) on January 8, 2019

    Jeff Clune and Kenneth O. Stanley were co-senior authors. We are interested in open-endedness at Uber AI Labs because it offers the potential for generating a diverse and ever-expanding curriculum for machine learning entirely on its own. Having vast amounts … The post POET: Endlessly Generating Increasingly Complex and Diverse Learning Environments and their Solutions through the Paired Open-Ended Trailblazer appeared first on Uber Engineering Blog.

  • Open Source at Uber: Meet Alex Sergeev, Horovod Project Lead
    by Molly Vorwerck (Machine Learning – Uber Engineering Blog) on December 13, 2018

    For Alex Sergeev, the decision to open source his team’s new distributed deep learning framework, Horovod, was an easy one. Tasked with training the machine learning models that power the sensing and perception systems used by our Advanced … The post Open Source at Uber: Meet Alex Sergeev, Horovod Project Lead appeared first on Uber Engineering Blog.

  • AlphaZero: Shedding new light on chess, shogi, and Go
    on December 6, 2018

    In late 2017 we introduced AlphaZero, a single system that taught itself from scratch how to master the games of chess, shogi (Japanese chess), and Go, beating a world-champion program in each case. We were excited by the preliminary results and thrilled to see the response from members of the chess community, who saw in AlphaZero's games a ground-breaking, highly dynamic and unconventional style of play that differed from any chess playing engine that came before it. Today, […]

  • AlphaFold: Using AI for scientific discovery
    on December 2, 2018

    Today we're excited to share DeepMind's first significant milestone in demonstrating how artificial intelligence research can drive and accelerate new scientific discoveries. With a strongly interdisciplinary approach to our work, DeepMind has brought together experts from the fields of structural biology, physics, and machine learning to apply cutting-edge techniques to predict the 3D structure of a protein based solely on its genetic sequence. Our system, AlphaFold, which we […]

  • How to Get a Better GAN (Almost) for Free: Introducing the Metropolis-Hastings GAN
    by Ryan Turner (Machine Learning – Uber Engineering Blog) on November 29, 2018

    Generative Adversarial Networks (GANs) have achieved impressive feats in realistic image generation and image repair. Art produced by a GAN has even been sold at auction for over $400,000! At Uber, GANs have myriad potential applications, including strengthening our … The post How to Get a Better GAN (Almost) for Free: Introducing the Metropolis-Hastings GAN appeared first on Uber Engineering Blog.
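
    The technique in the title can be summarised briefly: treat the trained generator as the proposal of an independence Metropolis-Hastings sampler and use a calibrated discriminator as a density-ratio estimate to accept or reject samples. The sketch below shows only that acceptance rule; the generator and discriminator are placeholders, and calibration is assumed rather than implemented.

    ```python
    # Sketch of Metropolis-Hastings filtering over GAN samples.
    # `generator()` draws one sample; `discriminator(x)` is assumed to return a
    # calibrated estimate of P(real | x). Both are placeholders, not a real model.
    import numpy as np

    rng = np.random.default_rng(0)

    def mh_select(generator, discriminator, k=100, eps=1e-6):
        """Run an independence MH chain for k steps and return the final sample."""
        x = generator()
        d_cur = np.clip(discriminator(x), eps, 1.0 - eps)
        for _ in range(k):
            x_prop = generator()
            d_prop = np.clip(discriminator(x_prop), eps, 1.0 - eps)
            # With proposal p_g and target p_data, p_data/p_g = D/(1-D) for a
            # calibrated D, so the acceptance ratio reduces to:
            alpha = min(1.0, (1.0 / d_cur - 1.0) / (1.0 / d_prop - 1.0))
            if rng.uniform() < alpha:
                x, d_cur = x_prop, d_prop
        return x
    ```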

  • Collaboration at Scale: Highlights from Uber Open Summit 2018
    by Wayne Cunningham (Machine Learning – Uber Engineering Blog) on November 20, 2018

    Uber held its first open source summit on November 15, 2018, inviting members of the open source community for presentations given by experts on some of the projects we have contributed in the fields of big data, visualization, machine learning, … The post Collaboration at Scale: Highlights from Uber Open Summit 2018 appeared first on Uber Engineering Blog.

  • Experience in AI: Uber Hires Jan Pedersen
    by Wayne Cunningham (Machine Learning – Uber Engineering Blog) on November 15, 2018

    Whenever a rider gets dropped off at their location, one of our driver-partners finishes a session laden with trips, or an eater gets food delivered to their door, data underlies these interactions on the Uber platform. And our teams could … The post Experience in AI: Uber Hires Jan Pedersen appeared first on Uber Engineering Blog.

  • NVIDIA: Accelerating Deep Learning with Uber’s Horovod
    by Molly Vorwerck (Machine Learning – Uber Engineering Blog) on November 14, 2018

    NVIDIA, inventor of the GPU, creates solutions for building and training AI-enabled systems. In addition to providing hardware and software for much of the industry’s AI research, NVIDIA is building an AI computing platform for developers of self-driving vehicles. With … The post NVIDIA: Accelerating Deep Learning with Uber’s Horovod appeared first on Uber Engineering Blog.

  • Scaling Streams with Google
    on November 13, 2018

    We're excited to announce that the team behind Streams, our mobile app that supports doctors and nurses to deliver faster, better care to patients, will be joining Google. It's been a phenomenal journey to see Streams go from initial idea to live deployment, and to hear how it's helped change the lives of patients and the nurses and doctors who treat them. The arrival of world-leading health expert Dr. David Feinberg at Google will accelerate these efforts, helping to make a […]

  • My Journey from Working as a Fabric Weaver in Ethiopia to Becoming a Software Engineer at Uber in San Francisco
    by Samuel Zemedkun (Machine Learning – Uber Engineering Blog) on November 12, 2018

    I was born in Addis Ababa, Ethiopia and was raised there with my five younger sisters. My father made traditional fabrics, weaving one thread at a time. Weaving in Ethiopia is a family business and every member of the family … The post My Journey from Working as a Fabric Weaver in Ethiopia to Becoming a Software Engineer at Uber in San Francisco appeared first on Uber Engineering Blog.

  • Predicting eye disease with Moorfields Eye Hospital
    on November 5, 2018

    In August, we announced the first stage of our joint research partnership with Moorfields Eye Hospital, which showed how AI could match world-leading doctors at recommending the correct course of treatment for over 50 eye diseases, and also explain how it arrives at its recommendations. Now we're excited to start working on the next research challenge: whether we can help clinicians predict eye diseases before symptoms set in. There are two types of age-related macular […]

  • Michelangelo PyML: Introducing Uber’s Platform for Rapid Python ML Model Development
    by Kevin Stumpf (Machine Learning – Uber Engineering Blog) on October 23, 2018

    As a company heavily invested in AI, Uber aims to leverage machine learning (ML) in product development and the day-to-day management of our business. In pursuit of this goal, our data scientists spend considerable amounts of time prototyping and validating … The post Michelangelo PyML: Introducing Uber’s Platform for Rapid Python ML Model Development appeared first on Uber Engineering Blog.

  • Applying Customer Feedback: How NLP & Deep Learning Improve Uber’s Maps
    by Chun-Chen Kuo (Machine Learning – Uber Engineering Blog) on October 22, 2018

    High quality map data powers many aspects of the Uber trip experience. Services such as Search, Routing, and Estimated Time of Arrival (ETA) prediction rely on accurate map data to provide a safe, convenient, and efficient experience for riders, drivers, … The post Applying Customer Feedback: How NLP & Deep Learning Improve Uber’s Maps appeared first on Uber Engineering Blog.

  • Open sourcing TRFL: a library of reinforcement learning building blocks
    on October 17, 2018

    Today we are open sourcing a new library of useful building blocks for writing reinforcement learning (RL) agents in TensorFlow. Named TRFL (pronounced truffle), it represents a collection of key algorithmic components that we have used internally for a large number of our most successful agents such as DQN, DDPG and the Importance Weighted Actor Learner Architecture. A typical deep reinforcement learning agent consists of a large number of interacting components: at the very […]
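
    To make "building blocks" concrete, the snippet below computes the kind of component such a library packages: a one-step Q-learning TD error and its squared loss, written here in plain NumPy rather than with TRFL's TensorFlow ops.

    ```python
    # One-step Q-learning TD error and loss, the kind of building block TRFL packages.
    # Plain NumPy illustration; TRFL itself provides TensorFlow ops for this.
    import numpy as np

    def qlearning_loss(q_tm1, a_tm1, r_t, discount_t, q_t):
        """q_tm1: [B, A] Q-values before the step; a_tm1: [B] actions taken;
        r_t: [B] rewards; discount_t: [B] discounts (0 at episode end); q_t: [B, A]."""
        batch = np.arange(q_tm1.shape[0])
        target = r_t + discount_t * q_t.max(axis=1)     # bootstrapped target
        td_error = target - q_tm1[batch, a_tm1]         # TD error per transition
        return 0.5 * np.mean(td_error ** 2), td_error

    # Tiny worked example with a batch of two transitions.
    q_tm1 = np.array([[1.0, 2.0], [0.5, 0.0]])
    q_t = np.array([[1.5, 0.5], [1.0, 2.0]])
    loss, td = qlearning_loss(q_tm1, np.array([1, 0]), np.array([0.0, 1.0]),
                              np.array([0.99, 0.0]), q_t)
    print(loss, td)
    ```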

  • Expanding our research on breast cancer screening to Japan
    on October 4, 2018

    Japanese version follows. Six months ago, we joined a groundbreaking new research partnership led by the Cancer Research UK Imperial Centre at Imperial College London to explore whether AI technology could help clinicians diagnose breast cancers on mammograms more quickly and effectively. Breast cancer is a huge global health problem. Around the world, over 1.6 million people are diagnosed with the disease every single year, and 500,000 lose their lives to it, partly because […]

  • Improving Driver Communication through One-Click Chat, Uber’s Smart Reply System
    by Yue Weng (Machine Learning – Uber Engineering Blog) on September 28, 2018

    Imagine standing curbside, waiting for your Uber ride to arrive. On your app, you see that the car is barely moving. You send them a message to find out what’s going on. Unbeknownst to you, your driver-partner is stuck in … The post Improving Driver Communication through One-Click Chat, Uber’s Smart Reply System appeared first on Uber Engineering Blog.

  • Introducing Petastorm: Uber ATG’s Data Access Library for Deep Learning
    by Robbie Gruener (Machine Learning – Uber Engineering Blog) on September 21, 2018

    In recent years, deep learning has taken a central role in solving a wide range of problems in pattern recognition. At Uber Advanced Technologies Group (ATG), we use deep learning to solve various problems in the autonomous driving space, since … The post Introducing Petastorm: Uber ATG’s Data Access Library for Deep Learning appeared first on Uber Engineering Blog.

  • Preserving Outputs Precisely while Adaptively Rescaling Targets
    on September 13, 2018

    Multi-task learning - allowing a single agent to learn how to solve many different tasks - is a longstanding objective for artificial intelligence research. Recently, there has been a lot of excellent progress, with agents like DQN able to use the same algorithm to learn to play multiple games including Breakout and Pong. These algorithms were used to train individual expert agents for each task. As artificial intelligence research advances to more complex real world domains, […]

  • Using AI to plan head and neck cancer treatments
    on September 13, 2018

    Early results from our partnership with the Radiotherapy Department at University College London Hospitals NHS Foundation Trust suggest that we are well on our way to developing an artificial intelligence (AI) system that can analyse and segment medical scans of head and neck cancer to a similar standard as expert clinicians. This segmentation process is an essential but time-consuming step when planning radiotherapy treatment. The findings also show that our system can […]

  • Food Discovery with Uber Eats: Recommending for the Marketplace
    by Yuyan Wang (Machine Learning – Uber Engineering Blog) on September 10, 2018

    Even as we improve Uber Eats to better understand eaters’ intentions when they use search, there are times when eaters just don’t know what they want to eat. In those situations, the Uber Eats app provides a personalized experience for … The post Food Discovery with Uber Eats: Recommending for the Marketplace appeared first on Uber Engineering Blog.

  • Forecasting at Uber: An Introduction
    by Franziska Bell (Machine Learning – Uber Engineering Blog) on September 6, 2018

    This article is the first in a series dedicated to explaining how Uber leverages forecasting to build better products and services. In recent years, machine learning, deep learning, and probabilistic programming have shown great promise in generating accurate forecasts. In … The post Forecasting at Uber: An Introduction appeared first on Uber Engineering Blog.

  • Safety-first AI for autonomous data centre cooling and industrial control
    on August 17, 2018

    Many of society's most pressing problems have grown increasingly complex, so the search for solutions can feel overwhelming. At DeepMind and Google, we believe that if we can use AI as a tool to discover new knowledge, solutions will be easier to reach. In 2016, we jointly developed an AI-powered recommendation system to improve the energy efficiency of Google's already highly-optimised data centres. Our thinking was simple: even minor improvements would provide significant […]

  • A major milestone for the treatment of eye disease
    on August 13, 2018

    We are delighted to announce the results of the first phase of our joint research partnership with Moorfields Eye Hospital, which could potentially transform the management of sight-threatening eye disease. The results, published online in Nature Medicine (open access full text, see end of blog), show that our AI system can quickly interpret eye scans from routine clinical practice with unprecedented accuracy. It can correctly recommend how patients should be referred for […]

  • Objects that Sound
    on August 6, 2018

    Visual and audio events tend to occur together: a musician plucking guitar strings and the resulting melody; a wine glass shattering and the accompanying crash; the roar of a motorcycle as it accelerates. These visual and audio stimuli are concurrent because they share a common cause. Understanding the relationship between visual events and their associated sounds is a fundamental way that we make sense of the world around us. In Look, Listen, and Learn and Objects that Sound […]

  • Measuring abstract reasoning in neural networks
    on July 11, 2018

    Neural network-based models continue to achieve impressive results on longstanding machine learning problems, but establishing their capacity to reason about abstract concepts has proven difficult. Building on previous efforts to solve this important feature of general-purpose learning systems, our latest paper sets out an approach for measuring abstract reasoning in learning machines, and reveals some important insights about the nature of generalisation itself.

  • DeepMind papers at ICML 2018
    on July 9, 2018

    The 2018 International Conference on Machine Learning will take place in Stockholm, Sweden from 10-15 July. For those attending and planning the week ahead, we are sharing a schedule of DeepMind presentations at ICML (you can download a pdf version here). We look forward to the many engaging discussions, ideas, and collaborations that are sure to arise from the conference! Efficient Neural Audio Synthesis. Authors: Nal Kalchbrenner, Erich Elsen, Karen Simonyan, Seb Nouri, Norman […]

  • DeepMind Health Response to Independent Reviewers' Report 2018
    on June 15, 2018

    When we set up DeepMind Health we believed that pioneering technology should be matched with pioneering oversight. That's why when we launched in February 2016, we did so with an unusual and additional mechanism: a panel of Independent Reviewers, who meet regularly throughout the year to scrutinise our work. This is an innovative approach within tech companies - one that forces us to question not only what we are doing, but how and why we are doing it - and we believe that […]

  • Neural scene representation and rendering
    on June 14, 2018

    There is more than meets the eye when it comes to how we understand a visual scene: our brains draw on prior knowledge to reason and to make inferences that go far beyond the patterns of light that hit our retinas. For example, when entering a room for the first time, you instantly recognise the items it contains and where they are positioned. If you see three legs of a table, you will infer that there is probably a fourth leg with the same shape and colour hidden from view. […]

  • Royal Free London publishes findings of legal audit in use of Streams
    on June 13, 2018

    Last July, the Information Commissioner concluded an investigation into the use of the Streams app at the Royal Free London NHS Foundation Trust. As part of the investigation the Royal Free signed up to a set of undertakings, one of which was to commission a third party to audit the Royal Free's current data processing arrangements with DeepMind, to ensure that they fully complied with data protection law and respected the privacy and confidentiality rights of its […]

  • Prefrontal cortex as a meta-reinforcement learning system
    on May 14, 2018

    Recently, AI systems have mastered a range of video games such as Atari classics Breakout and Pong. But as impressive as this performance is, AI still relies on the equivalent of thousands of hours of gameplay to reach and surpass the performance of human video game players. In contrast, we can usually grasp the basics of a video game we have never played before in a matter of minutes. The question of why the brain is able to do so much more with so much less has given rise […]

  • Navigating with grid-like representations in artificial agents
    on May 9, 2018

    Most animals, including humans, are able to flexibly navigate the world they live in: exploring new areas, returning quickly to remembered places, and taking shortcuts. Indeed, these abilities feel so easy and natural that it is not immediately obvious how complex the underlying processes really are. In contrast, spatial navigation remains a substantial challenge for artificial agents, whose abilities are far outstripped by those of mammals. In 2005, a potentially crucial part […]

  • DeepMind, meet Android
    on May 8, 2018

    We're delighted to announce a new collaboration between DeepMind for Google and Android, the world's most popular mobile operating system. Together, we've created two new features that will be available to people with devices running Android P later this year:

  • DeepMind papers at ICLR 2018
    on April 26, 2018

    Between 30 April and 03 May, hundreds of researchers and engineers will gather in Vancouver, Canada, for the Sixth International Conference on Learning Representations. Here you can read details of all DeepMind's accepted papers and find out where you can see the accompanying poster sessions and talks. Maximum a posteriori policy optimisation. Authors: Abbas Abdolmaleki, Jost Tobias Springenberg, Nicolas Heess, Yuval Tassa, Remi Munos. We introduce a new algorithm for reinforcement […]

  • Our first COO Lila Ibrahim takes DeepMind to the next level
    on April 11, 2018

    One of the greatest pleasures of coming to work every day at DeepMind is the chance to collaborate with brilliant researchers and engineers from so many different fields and perspectives - with machine learning experts alongside neuroscientists, physicists, mathematicians, roboticists, ethicists and more. This level of interdisciplinary collaboration is both challenging and unusual, and it requires a unique type of organisation. We built DeepMind to combine the rigour and […]

  • Learning to navigate in cities without a map
    on March 29, 2018

    How did you learn to navigate the neighborhood of your childhood, to go to a friend's house, to your school or to the grocery store? Probably without a map and simply by remembering the visual appearance of streets and turns along the way. As you gradually explored your neighborhood, you grew more confident, mastered your whereabouts and learned new and increasingly complex paths. You may have gotten briefly lost, but found your way again thanks to landmarks, or perhaps even […]

  • Retour à Paris / A return to Paris
    on March 29, 2018

    English version follows. When we established our London headquarters in 2010, we wanted to make DeepMind the very best in cutting-edge research in the field of artificial intelligence. We also wanted to help the artificial intelligence community grow. To that end, we have published papers at the most selective conferences and journals (more than 180 to date!) and shared our knowledge in the field; we have encouraged our experts […]

  • Learning to write programs that generate images
    on March 27, 2018

    Through a human's eyes, the world is much more than just the images reflected in our corneas. For example, when we look at a building and admire the intricacies of its design, we can appreciate the craftsmanship it requires. This ability to interpret objects through the tools that created them gives us a richer understanding of the world and is an important aspect of our intelligence. We would like our systems to create similarly rich representations of the world. For example, […]

  • Understanding deep learning through neuron deletion
    on March 21, 2018

    Deep neural networks are composed of many individual neurons, which combine in complex and counterintuitive ways to solve a wide range of challenging tasks. This complexity grants neural networks their power but also earns them their reputation as confusing and opaque black boxes. Understanding how deep neural networks function is critical for explaining their decisions and enabling us to build more powerful systems. For instance, imagine the difficulty of trying to build a […]

  • Stop, look and listen to the people you want to help
    on March 6, 2018

    "I like to take things slow. Take it slowly and get it right first time," one participant said, but was quickly countered by someone else around the table: "But I'm impatient, I want to see the benefits now." This exchange neatly captures many of the conversations I heard at DeepMind Health's recent Collaborative Listening Summit. It also represents, in layman's terms, the debate that tech thinkers and policy-makers are having right now about the future of artificial […]

  • Learning by playing
    on February 28, 2018

    Getting children (and adults) to tidy up after themselves can be a challenge, but we face an even greater challenge trying to get our AI agents to do the same. Success depends on the mastery of several core visuo-motor skills: approaching an object, grasping and lifting it, opening a box and putting things inside of it. To make matters more complicated, these skills must be applied in the right sequence. Control tasks, like tidying up a table or stacking objects, require an […]

  • Researching patient deterioration with the US Department of Veterans Affairs
    on February 22, 2018

    We're excited to announce a medical research partnership with the US Department of Veterans Affairs (VA), one of the world's leading healthcare organisations responsible for providing high-quality care to veterans and their families across the United States. This project will see us analyse patterns from historical, depersonalised medical records to predict patient deterioration. Patient deterioration is a significant global health problem that often has fatal consequences. […]

  • Scalable agent architecture for distributed training
    on February 5, 2018

    Deep Reinforcement Learning (DeepRL) has achieved remarkable success in a range of tasks, from continuous control problems in robotics to playing games like Go and Atari. The improvements seen in these domains have so far been limited to individual tasks where a separate agent has been tuned and trained for each task. In our most recent work, we explore the challenge of training a single agent on many tasks. Today we are releasing DMLab-30, a set of new tasks that span a large […]

  • Learning explanatory rules from noisy data
    on January 29, 2018

    Suppose you are playing football. The ball arrives at your feet, and you decide to pass it to the unmarked striker. What seems like one simple action requires two different kinds of thought. First, you recognise that there is a football at your feet. This recognition requires intuitive perceptual thinking - you cannot easily articulate how you come to know that there is a ball at your feet, you just see that it is there. Second, you decide to pass the ball to a particular […]

  • Open-sourcing Psychlab
    on January 26, 2018

    Consider the simple task of going shopping for your groceries. If you fail to pick up an item that is on your list, what does it tell us about the functioning of your brain? It might indicate that you have difficulty shifting your attention from object to object while searching for the item on your list. It might indicate a difficulty with remembering the grocery list. Or could it be something to do with executing both skills simultaneously?

  • Game-theory insights into asymmetric multi-agent games
    on January 17, 2018

    As AI systems start to play an increasing role in the real world it is important to understand how different systems will interact with one another. In our latest paper, published in the journal Scientific Reports, we use a branch of game theory to shed light on this problem. In particular, we examine how two intelligent systems behave and respond in a particular type of situation known as an asymmetric game, examples of which include Leduc poker and various board games such as Scotland […]

  • 2017: DeepMind's year in review
    on December 21, 2017

    In July, the world number one Go player Ke Jie spoke after a streak of 20 wins. It was two months after he had played AlphaGo at the Future of Go Summit in Wuzhen, China. "After my match against AlphaGo, I fundamentally reconsidered the game, and now I can see that this reflection has helped me greatly," he said. "I hope all Go players can contemplate AlphaGo's understanding of the game and style of thinking, all of which is deeply meaningful. Although I lost, I discovered that […]

  • Collaborating with patients for better outcomes
    on December 19, 2017

    Working as a doctor in the NHS for over 10 years, I felt that I had developed a good understanding of how patients and their families felt when faced with an upsetting diagnosis or important health decision. I had been lucky with my own health, having only spent one night in hospital for what ended up being a false alarm. But when my son was born prematurely two years ago, I had a glimpse into what being on the other side feels like - an experience that has profoundly shaped […]

  • DeepMind papers at NIPS 2017
    on December 1, 2017

    Between 04-09 December, thousands of researchers and experts will gather for the Thirty-first Annual Conference on Neural Information Processing Systems (NIPS) in Long Beach, California. Here you will find an overview of the papers DeepMind researchers will present.

  • Why doesn't Streams use AI?
    on November 29, 2017

    One of the questions I'm most often asked about Streams, our secure mobile healthcare app, is: why is DeepMind making something that doesn't use artificial intelligence? It's a fair question to ask of an artificial intelligence (AI) company. When we first started thinking about working in healthcare, our natural focus was on AI and how it could be used to help the NHS and its patients. We see huge potential for AI to revolutionise our understanding of diseases - how they develop […]

  • Specifying AI safety problems in simple environments
    on November 28, 2017

    As AI systems become more general and more useful in the real world, ensuring they behave safely will become even more important. To date, the majority of technical AI safety research has focused on developing a theoretical understanding about the nature and causes of unsafe behaviour. Our new paper builds on a recent shift towards empirical testing (see Concrete Problems in AI Safety) and introduces a selection of simple reinforcement learning environments designed […]

  • Population based training of neural networks
    on November 27, 2017

    Neural networks have shown great success in everything from playing Go and Atari games to image recognition and language translation. But often overlooked is that the success of a neural network at a particular application is often determined by a series of choices made at the start of the research, including what type of network to use and the data and method used to train it. Currently, these choices - known as hyperparameters - are chosen through experience, random search […]
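
    Population based training, the approach this post goes on to introduce, interleaves ordinary training with periodic exploit-and-explore steps across a population of workers. Below is a compressed, synchronous toy sketch of that loop; the real method runs workers asynchronously and copies network weights as well as hyperparameters.

    ```python
    # Toy, synchronous sketch of population based training (PBT):
    # train a little, copy state from better members (exploit),
    # then perturb the copied hyperparameters (explore).
    import random

    random.seed(0)

    def train_step(member):
        # Placeholder "training": quality improves faster with a larger learning rate here.
        member["score"] += member["lr"] * (1.0 - member["score"])

    population = [{"lr": random.uniform(1e-4, 1e-1), "score": 0.0} for _ in range(10)]

    for step in range(1, 101):
        for m in population:
            train_step(m)
        if step % 10 == 0:                          # periodic exploit/explore
            ranked = sorted(population, key=lambda m: m["score"])
            bottom, top = ranked[:2], ranked[-2:]
            for loser in bottom:
                winner = random.choice(top)
                loser["score"] = winner["score"]    # exploit: copy state (stands in for weights)
                loser["lr"] = winner["lr"] * random.choice([0.8, 1.2])  # explore: perturb

    best = max(population, key=lambda m: m["score"])
    print(f"best score {best['score']:.3f} with lr {best['lr']:.4f}")
    ```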