By John P. Desmond, AI Trends Editor
At its virtual AI Summit held in December, IBM announced updates across the Watson family of products in the areas of language, explainability, and workplace automation. These included an effort to commercialize AI FactSheets developed by IBM Research, which were first proposed in a paper published in 2018.
The FactSheets will answer questions about system operation, training data, underlying algorithms, test setups and results, performance benchmarks, fairness and robustness checks, intended uses, maintenance, and retraining, according to an account in VentureBeat.
Specific FactSheets will offer:
- Policy Creation: FactSheet Templates define which facts are collected and tracked about a model (such as how an AI service was created, tested, trained, deployed, and evaluated), how data is used, which regulations or company policies the organization is accounting for, who can use the model and for what purpose, and how it should operate.
- Automated Reporting: The FactSheet provides a sharable resource that offers knowledge about the model in a range of formats, depending on the preferences of different team members. It tracks the facts as the model is built, updated, and running in production, providing up-to-date insights.
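To make the idea concrete, a FactSheet of the kind described above can be sketched as a simple structured record that renders into a shareable report. The field names below are assumptions drawn from the categories the article mentions, not IBM's actual FactSheet schema.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class FactSheet:
    """Minimal illustrative FactSheet record; field names are
    assumptions, not IBM's actual schema."""
    model_name: str
    intended_use: str
    training_data: str
    algorithm: str
    test_setup: str
    performance: dict = field(default_factory=dict)
    fairness_checks: list = field(default_factory=list)
    robustness_checks: list = field(default_factory=list)
    maintenance: str = ""

    def to_report(self) -> str:
        # Render the tracked facts as a plain-text report,
        # skipping any fields that have not been filled in yet.
        lines = [f"FactSheet: {self.model_name}"]
        for key, value in asdict(self).items():
            if key != "model_name" and value:
                lines.append(f"  {key}: {value}")
        return "\n".join(lines)

# Hypothetical model used only to exercise the sketch.
sheet = FactSheet(
    model_name="loan-risk-classifier",
    intended_use="Score consumer loan applications; not for hiring decisions",
    training_data="2015-2019 anonymized loan outcomes",
    algorithm="Gradient-boosted trees",
    test_setup="20% holdout, stratified by region",
    performance={"AUC": 0.87},
    fairness_checks=["disparate impact ratio >= 0.8 across age groups"],
)
print(sheet.to_report())
```

In a production pipeline the same record would be updated automatically as the model is built, retrained, and monitored, which is the "up-to-date insights" behavior the announcement describes.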
“Like nutrition labels for foods or information sheets for appliances, factsheets for AI services would provide information about the product’s important characteristics,” stated Aleksandra Mojsilovic, head of AI foundations at IBM Research and an architect of AI FactSheets, in an interview with VentureBeat. “The issue of trust in AI is top of mind for IBM and many other technology developers and providers. AI-powered systems hold enormous potential to transform the way we live and work but also exhibit some vulnerabilities, such as exposure to bias, lack of explainability, and susceptibility to adversarial attacks. These issues must be addressed in order for AI services to be trusted.”
IBM also unveiled a new feature called Reading Comprehension that provides answers from databases of enterprise documents in response to natural language questions, assigning a confidence score to each response. Reading Comprehension is currently in beta in IBM’s AI-powered search service Watson Discovery.
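The interface the announcement describes, an answer paired with a confidence score, can be illustrated with a deliberately simple word-overlap retriever. This is a toy sketch of the answer-plus-confidence pattern only; Watson Discovery's actual Reading Comprehension models are far more sophisticated, and none of the names below come from IBM's API.

```python
import re
from math import sqrt

def answer_with_confidence(question, passages):
    """Toy sketch: return the passage with the highest word overlap
    with the question, plus a normalized overlap score in [0, 1]."""
    q_words = set(re.findall(r"\w+", question.lower()))
    best, best_score = None, 0.0
    for p in passages:
        p_words = set(re.findall(r"\w+", p.lower()))
        if not p_words:
            continue
        overlap = len(q_words & p_words)
        # Cosine-style normalization so longer passages are not favored.
        score = overlap / sqrt(len(q_words) * len(p_words))
        if score > best_score:
            best, best_score = p, score
    return best, round(best_score, 2)

# Hypothetical enterprise documents.
docs = [
    "Vacation requests must be submitted two weeks in advance.",
    "Expense reports are due by the fifth business day of each month.",
]
answer, confidence = answer_with_confidence(
    "When are expense reports due?", docs)
print(answer, confidence)
```

The confidence score lets a downstream application decide whether to surface the answer directly or fall back to a ranked list of documents, which is the practical value of scoring each response.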
KOY-Law Intelligence of Brazil was founded to offer a legal management platform for law firms, according to a use case on the IBM website. Built with IBM Watson, the platform recognizes and classifies lawsuits, schedules legal actions, and tracks cases as they move through the system. “Brazil has over 100 million lawsuits on the dockets in its multiple court systems,” stated Karla Capela Morais, CEO and Founder of KOY.
“Lawyers needed to be able to quickly and accurately examine massive amounts of data. I recognized that IBM Watson could leverage natural language processing by reading and understanding all of that casework,” she stated. Clients using the system have substantially reduced the number of hours they need to spend researching cases, she said.
Working with AI technology has presented opportunities for the firm and its clients. “We had to learn how far the technology would extend to leverage our business. Working with IBM, we came to see that AI won’t take people’s jobs. It will instead let us do what we do best—setting our hands free, giving us time to use our brains and allowing us to be creative!” Morais stated.
Mojsilovic: an IBM Fellow and Holder of 16 Patents
The creator of FactSheets, Mojsilovic, has worked for IBM for over 20 years and has touched many areas of AI. She is an IBM Fellow and a scientist at the Watson Research Center in Yorktown Heights, New York, where she currently leads the Foundations of Trusted AI organization. She is also a founder and co-director of the IBM Social Good Fellowship program.
In addition, her research interests include multi-dimensional signal processing, predictive modeling, machine learning and pattern recognition. She is the author of over 100 publications and holds 16 patents.
Her original 2018 paper on AI FactSheets discussed trust in AI services and listed these “pillars of trusted AI”:
- Fairness: AI systems should use training data and models that are free of bias, to avoid unfair treatment of certain groups.
- Robustness: AI systems should be safe and secure, not vulnerable to tampering with, or compromise of, the data they are trained on.
- Explainability: AI systems should provide decisions or suggestions that can be understood by their users and developers.
- Lineage: AI systems should include details of their development, deployment, and maintenance, so they can be audited throughout their lifecycle.
“Just like a physical structure, trust can’t be built on one pillar alone. If an AI system is fair but can’t resist attack, it won’t be trusted. If it’s secure, but we can’t understand its output, it won’t be trusted. To build AI systems that are truly trusted, we need to strengthen all the pillars together,” stated Mojsilovic in the paper.