The fact that even AI experts sometimes have trouble training new models implies that the process has yet to be automated in a way that could be incorporated into a general-purpose product. Some of the biggest advances in deep learning will come through discovering more robust training methods. We have already seen some of this with advances like dropout, super-convergence, and transfer learning, all of which make training easier. Through the power of transfer learning (to be discussed in Part 3), training can be a robust process when defined for a narrow enough problem domain; however, we still have a long way to go in making training more robust in general.
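To make one of these techniques concrete: dropout improves robustness by randomly zeroing activations during training, so the network cannot rely on any single unit. Below is a minimal NumPy sketch of "inverted" dropout, where survivors are rescaled at train time so no adjustment is needed at inference; the function name and constants are illustrative, not from any particular library.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, p=0.5, training=True):
    """Inverted dropout: zero each unit with probability p during
    training, rescaling survivors by 1/(1-p) so the expected
    activation is unchanged; at inference, pass values through."""
    if not training or p == 0.0:
        return activations
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

x = np.ones((4, 8))
train_out = dropout(x, p=0.5, training=True)   # mix of 0.0 and 2.0
eval_out = dropout(x, p=0.5, training=False)   # identical to x
```

The rescaling is the detail that makes dropout easy to deploy: the same forward pass works at inference with dropout simply switched off.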
Companies of all sizes are implementing AI, ML, and cognitive technology projects for a wide range of reasons across a disparate array of industries and customer sectors. Some AI efforts focus on the development of intelligent devices and vehicles, which combine three simultaneous development streams: software, hardware, and constantly evolving machine learning models. Other efforts are internally focused enterprise work in predictive analytics, fraud management, or other process-oriented activities that aim to provide an additional layer of insight or automation on top of existing data and tooling. Still other initiatives center on conversational interfaces distributed across an array of devices and systems, and others pursue public- or private-sector applications that differ from these in more significant ways.
Privacy and tech experts say that governments must be agile in creating laws to protect their citizens from ethically dubious applications of artificial intelligence (AI).
Meanwhile, the acquisition by Facebook, no matter what form it takes, looks like a good fit given the U.S. company’s investment in next-generation platforms, including VR and AR. It is also another, perhaps worrying, example of U.S. tech companies hoovering up U.K. machine learning and AI talent early.
Guided by brain-like ‘spiking’ computational frameworks, neuromorphic computing—brain-inspired computing for machine intelligence—promises to realize artificial intelligence while reducing the energy requirements of computing platforms. This interdisciplinary field began with the implementation of silicon circuits for biological neural routines, but has evolved to encompass the hardware implementation of algorithms with spike-based encoding and event-driven representations. Here we provide an overview of the developments in neuromorphic computing for both algorithms and hardware and highlight the fundamentals of learning and hardware frameworks. We discuss the main challenges and the future prospects of neuromorphic computing, with emphasis on algorithm–hardware codesign.
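To illustrate what "spike-based encoding and event-driven representations" mean in practice, here is a toy leaky integrate-and-fire (LIF) neuron, the simplest spiking model commonly used in this field. All constants and names are arbitrary choices for illustration, not drawn from any specific neuromorphic platform.

```python
def lif_neuron(input_current, tau=10.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Toy leaky integrate-and-fire neuron: the membrane potential
    leaks toward zero, integrates the input current, and emits a
    discrete spike (an 'event') whenever it crosses the threshold,
    after which it resets."""
    v = 0.0
    spike_times = []
    for t, i in enumerate(input_current):
        v += dt * (-v / tau + i)   # leak toward 0, integrate input
        if v >= v_thresh:          # threshold crossing: emit a spike event
            spike_times.append(t)
            v = v_reset            # reset membrane potential
    return spike_times

# A constant input current is encoded as a sparse train of spike events.
spikes = lif_neuron([0.3] * 20)
```

The appeal for hardware is visible even in this sketch: information leaves the neuron only as sparse, discrete events, so an event-driven circuit can stay idle (and consume little energy) between spikes.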
This is not to say that no startups are working to commercialize this technology. Last year, CureMetrix became the first company to receive FDA approval for its AI-based breast cancer technology; the company plans to deploy in several clinical settings this year. Other startups angling to commercialize and scale AI-based radiology in the near term include Arterys, Aidoc, Zebra Medical Vision, and DeepHealth.