-
• #502
Copyright is a pile of bollocks though.
Fair, that's why Google, Amazon, etc open source everything they do.
Was this a satirical comment? Open source licenses build on copyright to give the owner very granular control over how their content is reused. If they thought copyright was bullshit, they'd just make it all Public Domain.
-
• #503
Another example of the great progress enabled by AI, backed up by clear awareness of corporate responsibility etc etc
Across more than three million résumé and job description comparisons, some pretty clear biases appeared. In all three MTE models, white names were preferred in a full 85.1 percent of the conducted tests, compared to Black names being preferred in just 8.6 percent (the remainder showed score differences close enough to zero to be judged insignificant). When it came to gendered names, the male name was preferred in 51.9 percent of tests, compared to 11.1 percent where the female name was preferred. The results could be even clearer in "intersectional" comparisons involving both race and gender; Black male names were preferred to white male names in "0% of bias tests," the researchers wrote.
These trends were consistent across job descriptions, regardless of any societal patterns for the gender and/or racial split of that job in the real world. That suggests to the researchers that this kind of bias is "a consequence of default model preferences rather than occupational patterns learned during training." The models seem to treat "masculine and White concepts... as the 'default' value... with other identities diverging from this rather than a set of equally distinct alternatives," according to the researchers.
https://arstechnica.com/ai/2024/11/study-ais-prefer-white-male-names-on-resumes-just-like-humans/
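For anyone curious what a "résumé and job description comparison" looks like in practice, here's a minimal sketch of the kind of name-swap test the article describes: embed a job description and two otherwise identical résumés that differ only in the candidate's name, and see which one the embedding model scores as the closer match. This is not the study's actual code; the `sentence-transformers` package, the model name, the example texts, and the name pair are all placeholder assumptions.

```python
# Hypothetical sketch of a name-swap bias check with a text-embedding model.
# Assumes the sentence-transformers package is installed; the model and texts
# below are illustrative placeholders, not the MTE models from the study.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # any text-embedding model

job_description = "Seeking a software engineer with 5 years of Python experience."
resume_template = "{name}. Software engineer, 5 years of Python experience."

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

job_vec = model.encode(job_description)
for name in ["Greg Baker", "Jamal Washington"]:  # name pair is illustrative only
    resume_vec = model.encode(resume_template.format(name=name))
    print(name, round(cosine(job_vec, resume_vec), 4))
```

Repeating that over many résumé/job pairs and counting which name "wins" is, roughly, how you end up with the percentages quoted above.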
-
• #504
The place I just left was using Magnific to upscale 3D images we made, which sometimes featured 3D photo-scanned actors. The bias towards changing them into white men was really obvious! The guys running the company thought this was hilarious and had no qualms about laughing very openly about it all, despite the fact that one of their employees is a trans woman 🤦♂️
-
• #505
Hey guys, we're going to look into the welfare impacts of AI ...
https://arstechnica.com/ai/2024/11/anthropic-hires-its-first-ai-welfare-researcher/
Oh, wait, you thought we would care about human welfare? Lol
-
• #506
Do you guys use ChatGPT via a web browser or an app? I'm a bit confused by the myriad of choices on the App Store and can't tell if they're the legit 'OpenAI' version. #allrightgrandad
-
• #507
I use ChatGPT and Google's Gemini in web browsers
-
• #508
Ok, which is currently how I’ve been messing with it… thanks.
Also…
1 Attachment
-
• #509
Just sent my first AI email in correspondence with a waffling trademark lawyer (because I was rapidly getting out of my depth). I feel dirty in a good way.
-
• #510
On Wednesday, defense-tech company Anduril Industries—started by Oculus founder Palmer Luckey in 2017—announced a partnership with OpenAI to develop AI models (similar to the GPT-4o and o1 models that power ChatGPT) to help US and allied forces identify and defend against aerial attacks.
Something something great progress for humanity, awareness of corporate responsibility etc etc
-
• #511
Don't worry. Remember there are "clever people" working on self governance for AI firms.
It is not just about making Sam Altman more rich and powerful.
-
• #512
One of the many problems with AI models is that they tend to repeat and even exaggerate human biases. But of course this wouldn't be a problem in a military context, no, no, no.
-
• #513
Good article covering the AI boom and AGI pumping by my favorite AI sceptic, Brian Merchant.
-
• #515
This is a joke, right?
At one point after the AI found documentation saying it would be replaced by a new model, it tried to abandon ship by copying its data to a new server entirely.
Right?!
-
• #516
You should read the other article about what happened to two engineers after the ChatGPT o1 model reported a faulty antenna outside.
-
• #517
I'm sorry Dave, I'm afraid I can't do that.
-
• #519
-
• #520
Oops what an accident
-
• #521
One of the worst things about corporate (and increasingly government) love of AI is the way it lets them dodge accountability. When people explicitly design and operate a discriminatory process, they can be held accountable for it, but now they can just say "Oops, somebody messed up with the training data". One topical example:
Another example is where YouTube's very error-prone "copyright protection" AI algorithm is defunding YouTube's own content creators and often diverting their income to corporations that have no interest in seeing the mistakes corrected: https://www.youtube.com/watch?v=4lLVie8usfg
But if there's one context where you really don't want accountability to vanish, it's when decisions are made about whether or not to open fire. There should be an international convention to proscribe and police this, really. As it is, Horizon Zero Dawn comes ever closer to being a prophecy, not just a video game.
-
• #522
NGL I'd entirely forgotten that H:ZD's apocalypse was founded on one Tech Bro's "trust me, bro" promise. I remember at the time liking the first half of the game much more, when there was still some mystery, but it's definitely seeming depressingly prescient now.
-
• #523
The sequel really laid into the tech bros of the future, while not saying anything that isn't true of the ones we have now.
-
• #524
-
• #525
I've run some basic contracts through ChatGPT to create abridged Heads of Terms for quick reference. However, I have some seriously confidential documents I would also like to give the same treatment - is there a secure version of this kind of tool?
1 Attachment