Real-time Analytics News Roundup for Week Ending August 29


Deloitte announces its Trustworthy AI framework, the NSF establishes new AI institutes, the Department of Energy funds five quantum information science research centers, and more.

Keeping pace with news and developments in the real-time analytics market can be a daunting task. We want to help by providing a summary of some of the items our staff came across each week. Here is a short list of this week's news:

The Deloitte AI Institute announced its Trustworthy AI framework to help companies proactively address AI ethics and integrity. The framework aims to guide organizations on how to apply AI responsibly and ethically within their businesses.

Deloitte’s Trustworthy AI framework introduces six dimensions for organizations to consider when designing, developing, deploying, and operating AI systems. The framework helps manage common risks and challenges related to AI ethics and governance, including:

  1. Fair and impartial use checks: A common concern about AI is bias introduced by humans during development. To mitigate it, companies need to determine what constitutes fairness, actively identify biases within their algorithms and data, and implement controls to avoid inequitable outcomes (see the sketch after this list).
  2. Implementing transparency and explainable AI: For AI to be trustworthy, all participants have a right to understand how their data is being used and how the AI system is making decisions. Organizations should be prepared to build algorithms, attributes, and correlations that are open to inspection.
  3. Responsibility and accountability: Trustworthy AI systems need to include policies that clearly establish who is responsible and accountable for their output. This issue epitomizes the uncharted aspect of AI: Is it the responsibility of the developer, tester, or product manager? Is it the machine learning engineer, who understands the inner workings? Or does the ultimate responsibility go higher up the ladder — to the CIO or CEO, who might have to testify before a government body?
  4. Putting proper security in place: To be trustworthy, AI must be protected from risks, including cybersecurity risks, that could lead to physical and/or digital harm. Companies need to thoroughly consider and address all kinds of risks and then communicate those risks to users.
  5. Monitoring for reliability: For AI to achieve widespread adoption, it must be as robust and reliable as the traditional systems, processes, and people it is augmenting. Companies need to ensure their AI algorithms produce the expected results for each new data set. They also need established processes for handling issues and inconsistencies if they arise.
  6. Safeguarding privacy: Trustworthy AI must comply with data regulations and only use data for its stated and agreed-upon purposes. Organizations should ensure that consumer privacy is respected, customer data is not leveraged beyond its intended and stated use, and consumers can opt in and out of sharing their data.

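Putting the first of these dimensions into practice often starts with a simple statistical check. Below is a minimal sketch, in Python, of one such control: it measures the demographic parity gap, the difference in positive-prediction rates across groups. The data, group labels, and 0.2 alert threshold are illustrative assumptions for this example, not part of Deloitte's framework.

```python
# Hypothetical sketch of a basic fairness check: compare the rate of
# positive predictions across groups (demographic parity). All names,
# data, and thresholds are illustrative assumptions.
from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    """Return the fraction of positive predictions for each group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates between any two groups."""
    rates = positive_rate_by_group(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Example: model predictions (1 = approved) alongside a protected attribute.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

gap = demographic_parity_gap(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.2:  # the threshold is an arbitrary illustrative choice
    print("Warning: positive-outcome rates differ substantially across groups")
```

A check like this would typically run against each new data set as part of the monitoring described in the fifth dimension, with findings routed to whoever is accountable under the third.
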
The U.S. National Science Foundation (NSF) is establishing new artificial intelligence institutes to accelerate research, expand America’s workforce, and transform society in the decades to come. With more than $100 million invested over the next five years, NSF’s Artificial Intelligence Institutes represent the nation’s most significant federal investment in AI research and workforce development to date. The $20 million investment in each of five NSF AI institutes is just the beginning, with more institute announcements anticipated in the coming years.

The White House Office of Science and Technology Policy, the NSF, and the U.S. Department of Energy (DOE) announced over $1 billion for the establishment of 12 new artificial intelligence (AI) and quantum information science (QIS) research institutes nationwide. The $1 billion will go towards seven NSF-led AI Research Institutes and five DOE QIS Research Centers over five years, establishing a total of 12 multi-disciplinary and multi-institutional national hubs for research and workforce development in these critical emerging technologies. Together, the institutes will spur cutting-edge innovation, support regional economic growth, and advance American leadership in these critical industries of the future.

ThoughtSpot announced DataFlow, a new feature that makes it faster and easier for companies to load data into Falcon, the company’s in-memory database. DataFlow significantly reduces the technical resources needed to deploy ThoughtSpot while accelerating data access and analysis. Through a point-and-click UX that requires no coding, users connect to their data source, preview and select the data they want to move into ThoughtSpot, schedule their sync, and then immediately start searching for insights. DataFlow supports dozens of the most common databases, data warehouses, file sources, and applications, and can scale to handle modern data volumes without instability.
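
DataFlow itself is point-and-click, but the pattern it automates (connect to a source, select data, sync on a schedule) is the classic extract-and-load loop. The following minimal sketch shows that pattern in Python using SQLite as a stand-in for both source and destination; none of these names or functions reflect ThoughtSpot's actual APIs, and a real scheduler such as cron would replace the one-shot run.

```python
# Hypothetical sketch of the connect -> select -> sync pattern that a
# tool like DataFlow automates behind a UI. SQLite stands in for both
# the source and the destination; nothing here is a ThoughtSpot API.
import sqlite3

SOURCE = "source.db"
DEST = "dest.db"

def setup_demo_source():
    """Create a small demo source table so the sketch runs end to end."""
    with sqlite3.connect(SOURCE) as db:
        db.execute("CREATE TABLE IF NOT EXISTS sales (id INTEGER, amount REAL)")
        db.execute("DELETE FROM sales")  # keep reruns idempotent
        db.executemany("INSERT INTO sales VALUES (?, ?)",
                       [(1, 19.99), (2, 5.00), (3, 42.50)])

def sync_table(table):
    """One sync cycle: read everything from the source and replace the
    destination copy (a full refresh rather than an incremental sync)."""
    with sqlite3.connect(SOURCE) as src, sqlite3.connect(DEST) as dst:
        rows = src.execute(f"SELECT * FROM {table}").fetchall()
        dst.execute(f"CREATE TABLE IF NOT EXISTS {table} (id INTEGER, amount REAL)")
        dst.execute(f"DELETE FROM {table}")
        dst.executemany(f"INSERT INTO {table} VALUES (?, ?)", rows)
    return len(rows)

if __name__ == "__main__":
    setup_demo_source()
    n = sync_table("sales")
    print(f"Synced {n} rows; a scheduler would rerun this on a cadence.")
```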

Appen has partnered with the World Economic Forum to design and release standards and best practices for responsible training data when building machine learning and artificial intelligence applications. As a World Economic Forum Associate Partner, Appen will collaborate with industry leaders to release the new standards within the “Shaping the Future of Technology Governance: Artificial Intelligence and Machine Learning” platform, providing a global guidepost for responsible training data collection and creation across countries and industries. The standards and best practices aim to improve quality, efficiency, transparency, and responsibility for AI projects while promoting inclusivity and collaboration. Adoption of these standards by the larger technology community will increase the value of, and trust in, the use of AI by businesses and the general public.

If your company has real-time analytics news, send your announcements to [email protected].



About Salvatore Salamone

Salvatore Salamone is a physicist by training who has been writing about science and information technology for more than 30 years. During that time, he has been a senior or executive editor at many industry-leading publications including High Technology, Network World, Byte Magazine, Data Communications, LAN Times, InternetWeek, Bio-IT World, and Lightwave, The Journal of Fiber Optics. He also is the author of three business technology books.
