By AI Trends Staff
The US government has woken up to the importance of AI, the work of AI scientists is paying off, and the investment community is supporting the industry. AI entrepreneurs and suppliers of AI-related products and services are staring at a market poised for dramatic growth.
Perspective is called for. “What we need to be more concerned about is the thoughtfulness around everything that we do,” suggested Joanne Lo, PhD, the CEO of Attica AI, speaking on data governance challenges during the opening morning of the 2nd Annual AI World Government conference and expo, held virtually last week. Lo said her company designs systems tools for the DoD and first responders.
She spends time with clients assessing the foundation on which new AI systems will be built. “We tell the client to think about the foundation, to clean up the data before we can build something new on top of it, so it does not crumble,” she said.
Her company also encourages clients to think beyond making applications work faster, on bigger computers and with faster chips. “We are working on those, and they should be done, but we say you need to think about what you as a human have to offer. What is the human 2.0 you can become,” she said.
Derry Goberdhansingh, CEO of Harper Paige, a technology services company with clients in the federal government and commercial business sectors, has confronted the fear of workers that the AI system they are being asked to help develop may replace their jobs. “I talk to folks about where the human part ends and the machine part begins. The human part will not end for some time,” he said during a roundtable on how AI is changing government work.
On the infrastructure side, he works with government clients on setting up an architecture, sometimes involving the cloud, to bring in the data needed to implement the machine learning model. “It is a labor-intensive process,” he said of the discussions.
At the outset of a project, he recommends the client define what improvement is needed and what outcome is sought. “Once you identify the outcome you want, you can break the problem down into chunks,” he said. AI project participants new to the field, he suggested, can learn a lot by reaching out to the community via the CTO or CIO’s office. “They may have an AI pilot going on. You can have a conversation,” he said. “I have connected government folks to other government folks working on projects. It’s amazing when you see what they are accomplishing.”
Technology is not the first topic of discussion. “Technology is the last leg,” said Goberdhansingh. “Even though this is a conversation about AI, the last thing I want to talk about is the Python code. I want to talk about the business argument.”
Thresher Analyzes Unstructured Text
Rebecca Fair, CEO and Cofounder of Thresher, a startup which analyzes unstructured text to create training data for machine learning models, compared training an ML model to training a toddler. “We give them examples and eventually they can distinguish on their own,” she said during the roundtable on how AI is changing government work. For AI projects, she said subject matter experts need to be on the team to help keep the focus practical.
Thresher’s QuickCode product is used to quickly classify text documents, such as customer comments or technical notes, and suggest keywords helpful in building precise queries. The training data needs to include many examples, capturing regional differences in speech, domain-specific content, and cases where the speaker uses code words to disguise intent. For example, “We have seen in Chinese social media, citizens use code words to evade online censors,” she said. “We need to know the context of how the language is being used.”
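QuickCode’s internals are not public, but as a hypothetical sketch, keyword-assisted labeling along the following lines can bootstrap training data from unlabeled comments. The labels and seed keywords here are invented for illustration:

```python
from collections import Counter

# Hypothetical seed keywords per label; a real system would let analysts
# refine these interactively as documents are reviewed.
SEED_KEYWORDS = {
    "complaint": {"broken", "refund", "late"},
    "praise": {"great", "love", "excellent"},
}

def suggest_label(document: str) -> str:
    """Assign the label whose seed keywords appear most often in the text."""
    words = Counter(document.lower().split())
    scores = {
        label: sum(words[kw] for kw in keywords)
        for label, keywords in SEED_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unlabeled"

# Suggested labels become candidate training examples for an ML model.
comments = [
    "The package arrived broken and I want a refund",
    "Great service, love the product",
    "Shipping was on time",
]
training_data = [(c, suggest_label(c)) for c in comments]
for text, label in training_data:
    print(label)
```

The “unlabeled” fallback matters: documents that match no keywords are exactly the ones a subject matter expert should review, which is where context such as regional speech or code words comes in.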
Asked by moderator Amy Loomis, research director on the Future of Work with IDC, what advice she would have for young people interested in a career in AI, Fair recommended candidates be well-rounded in math, including basic statistics. “For non-computer scientists, most of what we do in AI is grounded in statistics,” she said.
Microsoft Offering AI Capability Based on Azure Cloud Computing Service
Microsoft has invested heavily in AI and is working with a number of federal agencies on AI projects including the CDC, the Veterans Administration and the Department of Agriculture, said Susie Adams, CTO of Microsoft Federal. And while some may fear that AI will be taking over, “We believe AI is here to augment human intelligence and not replace it as sci-fi movies may suggest,” she said.
Microsoft’s investments in Azure for cloud computing have positioned it for growth. “What is fueling AI today and bringing it into the mainstream, comes down to the ability to host data and have the computational power to operate on it,” Adams said. “The cloud has led to a democratization of AI and has allowed for the distributed computing of AI models.”
Microsoft’s Flight Simulator 2020 video game has “AI infused into the game” and offers lessons for business. “A digital twin of the world is what we have now,” she said. “If we can make a digital twin of anything and generate synthetic data on that thing, we can simulate anything.”
Working with the US Department of Agriculture, Microsoft is helping to build a system that takes readings from sensors in the farmer’s field, sends them to the cloud and then generates specific recommendations to farmers. Using the same image recognition technology as facial recognition, image sensors can, for example, observe when plants are stressed from drought.
“The goal of precision farming is to get information about crop care to the farmer in the field as fast as possible,” she said. Microsoft uses “white space” spectrum, frequencies allocated to a broadcasting service but not used locally, to communicate the data from the farms and fields to the cloud.
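The sensor-to-recommendation loop described above can be sketched as follows. This is a hypothetical illustration, not the actual Microsoft/USDA system; the reading fields, thresholds and recommendations are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class FieldReading:
    soil_moisture: float   # volumetric water content, 0.0 to 1.0
    canopy_stress: float   # image-derived plant stress score, 0.0 to 1.0

def recommend(reading: FieldReading) -> str:
    """Turn one field reading into an actionable recommendation."""
    if reading.soil_moisture < 0.15 or reading.canopy_stress > 0.7:
        return "irrigate"          # drought stress detected
    if reading.canopy_stress > 0.4:
        return "inspect"           # possible pest or nutrient issue
    return "no action"

# A dry field triggers an irrigation recommendation.
print(recommend(FieldReading(soil_moisture=0.10, canopy_stress=0.2)))  # irrigate
```

In the deployed pipeline the stress score would come from the image-recognition model running on the sensor imagery, and the recommendation would be pushed back to the farmer over the white-space link.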
“The edge is a game changer here too,” Adams said. Azure IoT Edge is a fully managed service that connects the cloud to edge devices, including spacecraft such as satellites. Azure Orbital is a ground station as-a-service that provides command and control of the client’s satellite. Microsoft Turing is a supercomputer that works on large-scale AI models, used primarily to address business problems across Microsoft. “These breakthroughs are not theoretical,” Adams said.
HPE Taking Engineering View of Deploying ML Applications at Scale
The engineers at Hewlett Packard Enterprise (HPE) speak more of the language of software engineering familiar to enterprise IT types. In a talk entitled “Deploying and Managing Machine Learning Applications at Scale,” Glyn Bowden, CTO with HPE, outlined steps required to create and deploy an AI “solution.”
First is model training, which requires data engineering work to prepare and validate the data, removing bias and other flaws. A code repository is needed for the functional elements; a container is needed to hold the UI and binary artifacts around the application code. “How do you accelerate model training at scale, when it can require thousands of iterations?” he asked. Even with automated machine learning, many iterations are needed to train and assess the model, and “You need clean access to the best data for the job.”
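The iterate-train-validate cycle Bowden describes can be illustrated with a toy example. This is not HPE’s tooling; the dataset and the threshold “model” are invented stand-ins, but the pattern of searching over many candidates and promoting only the one that performs best on held-out data is the same:

```python
import random

random.seed(0)

# Toy dataset: the true rule is y = 1 when x > 0.6, which the
# search below has to recover from the data alone.
data = [(x, int(x > 0.6)) for x in [random.random() for _ in range(200)]]
train, valid = data[:150], data[150:]

def accuracy(threshold, rows):
    """Fraction of rows the candidate threshold classifies correctly."""
    return sum(int(x > threshold) == y for x, y in rows) / len(rows)

# Automated search: many candidate models are trained and scored,
# and only the best is checked against held-out validation data.
candidates = [i / 100 for i in range(100)]
best = max(candidates, key=lambda t: accuracy(t, train))
print(f"best threshold {best:.2f}, validation accuracy {accuracy(best, valid):.2f}")
```

Even this toy search evaluates a hundred candidates; a real automated-ML run multiplies that by model families and hyperparameters, which is why acceleration at scale and clean access to data dominate the discussion.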
The track record of putting AI into production, what Gartner has called “the last mile,” is not so good. Gartner’s surveys have found that over 60% of AI models developed with the intent of being “operationalized” have never made it into production. Bowden said 80% to 85% of enterprises are running into these last-mile production issues.
Examples of businesses that developed AI systems that are working include: the Mercedes-AMG Petronas Formula One Team, which is using machine learning in the pits in an effort to shave milliseconds off race time; and Texmark Chemicals, with an AI system focused on worker safety and facility condition monitoring using predictive maintenance and video analytics on the factory floor.
To bridge the two domains of creation and production, all the components and their relationships need to be tracked, in part to enable recovery from a failure. “The more we start depending on AI and machine learning applications, the more we need to build resilience into the system itself,” Bowden said.
The HPE Ezmeral Container Platform is a Kubernetes-based software platform for deploying and managing containerized enterprise applications at scale. It is intended for use by data engineers, data scientists and machine learning architects to bridge the two domains. (Kubernetes is an open source container orchestration system for automating computer application deployment and management. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation.) The HPE Ezmeral Container Platform and HPE Ezmeral ML Ops are to be available as cloud services through HPE GreenLake, meant to provide cloud services to data centers on a subscription basis.
SRI Tackling ‘Cyber-Physical World,’ Using ML for Complex Work
SRI International, the nonprofit scientific research institute now offering products and services incorporating AI, is today concentrating on ‘cyber-physical systems’ (CPS) that control or monitor a mechanism based on computer-based algorithms. “Cyber-physical systems are now starting to incorporate machine learning to tackle complex physical work,” said Manish Kothari, president of SRI International.
Examples of CPS include the smart grid, autonomous automobiles, medical monitoring, industrial control systems, robotic systems and automatic pilot avionics. “They are almost always networked, distributed, adaptive, predictive and often need to work in real time,” he said.
Digital twins, virtual simulations that can perfectly model a physical system, “are nearly impossible to achieve,” because sensors cannot be put everywhere and not everything can be modeled, he said. Instead, “We can automate a small part of creation, and 90% of the task is still run by humans.”
Software company Drishti Technologies is working with SRI to achieve productivity improvements through digital observations of physical tasks which are then optimized. With its cameras filming workers on an assembly line, the system is said to generate streams of data that enable continuous improvement of human performance.
Effective human-AI collaboration, SRI has found, rests on five principles: transparency, directability, communication, personalization and competency, said Kothari, crediting Dr. Karen Myers, program director and principal scientist in SRI’s AI Center, for the work.
The Siri virtual assistant, created at SRI and licensed to Apple, responds to verbal cues. Beyond that, “Understanding non-verbal cues is a critical element of how humans in the loop can interact with the system,” Kothari said, indicating an area of research for SRI.
These presentations and more are still available at the AI World Government virtual event.