Next steps on trustworthy AI: transparency, bias and better data governance

Over the last few years, Mozilla has turned its attention to AI, asking: how can we make the data-driven technologies we all use every day more trustworthy? How can we make things like social networks, home assistants, and search engines both more helpful and less harmful in the era ahead?

In 2021, we will take the next step in this work by digging deeper into three areas where we think we can make real progress: transparency, bias, and better data governance. While these may feel like big, abstract concepts at first glance, all three are at the heart of problems we hear about every day in the news, problems that are top of mind not just in tech circles but also among policy makers, business leaders, and the public at large.

Think about this: we know that social networks are driving misinformation and political division around the world, and there is growing consensus that we urgently need to do something to fix this. Yet we can't easily see inside; we can't scrutinize the AI that drives these platforms, making genuine fixes and real accountability impossible. Researchers, policy makers, and developers need to be able to see how these systems work, i.e. transparency, if we're going to tackle this issue.
