-
https://www.nvidia.com/en-gb/ai-data-science/trustworthy-ai/
https://aws.amazon.com/ai/responsible-ai/
https://ai.google/responsibility/principles/
https://newsroom.arm.com/blog/arm-ai-trust-manifesto

It's a recognised issue in the industry, one that CEOs and their boards know they have to address, because if they don't, governments will force them to.
Look at device security: lots of manufacturers are selling devices that are laughably easy to compromise. The industry has tried to address this, but too many rogue companies have ignored it, so now there is legislation either in place (the UK's PSTI Act) or coming (the EU's Cyber Resilience Act), and it's going to have a massive impact on companies. Hoover, for example, has disabled its connected-device service in the UK because it doesn't meet the PSTI requirements.
I know it's fun to dunk on Tech Bros, but there are a lot of people and companies that are acutely aware of their societal responsibilities and there is a long track record of successful industry-wide collaboration to improve and standardise areas of concern.
-
I know it's fun to dunk on Tech Bros, but there are a lot of people and companies that are acutely aware of their societal responsibilities
I hate to feel that I'm reducing the whole argument to "capitalism bad", but regardless of the merits of a few individuals it's hard not to feel that the system is structured to deliver poor outcomes. Moving fast and breaking things is fine for an app menu, less so for society.
My only hope is that we are seeing disproportionate coverage of AI in the creative sectors because journalists have a vested interest in promoting the issue. In the UK, at least, I think fintech attracts far larger investment than any other tech sector. And ultimately, making efficiencies in banking is less critical than destroying every cool job so you can turn out a shittier product with a better margin.
-
I know it's fun to dunk on Tech Bros, but there are a lot of people and companies that are acutely aware of their societal responsibilities and there is a long track record of successful industry-wide collaboration to improve and standardise areas of concern.
There’s also a long track record of obfuscation, misdirection and kicking the can down the road.
The Information carried a piece on Snapchat this week. Their claim is that they're "designed to be safe". Contrast that with their director of security engineering writing (internally) on the subject of child sexual abuse material: "that's fine it's been broken for ten years we can tolerate tonight". Or this nugget from their director of public policy: "There's only so many times we can say 'safety by design' or 'we're kind'. Politicians and regulators are looking for tangible substantive progress/initiatives".
There may well be well-meaning individuals, but any sense that you can trust the companies as a whole to do the right thing is, at best, misguided.
Lobbyists got to you, huh?
Jokes aside, which ones and how?