-
• #427
[double post]
-
• #428
A canary example that has stuck in my head is translation. It's a classic case of something that looks easy on paper and seems a perfect fit for machine learning, but anyone with even a passing interest in translation knows that a good translator does a lot more than a literal A-to-B job.
Previously it was a skilled job that some people could make a comfortable living from. But once ML became good enough at a surface level, the people making decisions stopped paying for skilled translators. Now, for the most part, you can't even find someone to do that work if you wanted to pay for the skills, because people can't make a living from it any more, and everyone is stuck with good-enough literal translations with none of the nuance a skilled practitioner brings.
-
• #429
In the race to the bottom, the quality of content will go rapidly downhill, but not everyone wants homogenised content.
We are already seeing this in music, where there are no big bands because it's cheaper to invest in mediocre singer-songwriters like the mild-mannered ginger one with the sixth-form poetry drivel output.
He's not going to trash a hotel or fall out with the lead guitarist, and there are no arguments about royalty splits etc. Glad I'm not a college leaver now, their future is pretty grim.
-
• #430
Glad I’m not a college leaver now, their future is pretty grim.
They will find work in an Amazon distribution centre, and achieve self-realisation by praying at the shrine to Bezos.
-
• #431
This is definitely a valid point. Full disclosure: I work at a publisher and am involved in these conversations on a daily basis. We're currently quite a way from the situation you describe — most of the industry does still use human translators, and my company has never used an AI translator — but that is definitely the direction of travel. The inverse to your point about translators losing income is the possibility for all, or at least more, books to be translated. Currently, the cost of hiring a translator (and the lack of immediate demand) means 98% of books will never get translated. If we reach a point where this can genuinely be automated and the output is of an acceptable standard, then many more authors will have the opportunity of far greater international discovery and potential income streams.
But yes, translators, copywriters, copy editors, authors and illustrators (of 'generic' works) are absolutely on the frontline of the AI-stole-my-job battle.
-
• #432
Will the translations be of authors' works, or of AI-generated fiction?
-
• #433
Talking of Amazon,
AWS' Graviton servers are up to 40% more energy efficient than their predecessors.
...as a first-choice exhibit in defence of big tech's impeccable social and environmental responsibility credentials is surely beyond parody.
-
• #434
The platitude you often hear about this is that "human-authored work will have an increased cachet in an AI world. Like vinyl." On the other hand, various companies and bots are already spitting out AI-generated books on all topics (including fiction) into the Kindle self-published sphere, and some are tricky to identify as such. So it's definitely coming. The publishing industry likes to view itself as 'curators' (many would say gatekeepers), selecting and nurturing the best writers and illustrators. If they can continue to turn this into a valued proposition, then hopefully human authors will continue to be viable, at some scale. I would be very concerned if I were a generic non-fiction author at the moment. Something the author associations are arguing for in their ongoing lobbying of governments is a requirement for AI-generated text to carry a watermark or disclaimer to that effect, hoping to in some way safeguard the perceived value of the human.
-
• #435
https://scottbradlee.substack.com/p/the-contentapocalypse-is-coming
Worth a read and a bit more concise than my ramblings.
-
• #436
Human progress has always come at a cost, but usually it's a short-term cost that benefits more people in the long term.
At the risk of invoking Godwin's law, when you're using basically the same supporting argument as the final solution, you might want to reflect on whether you're on the wrong side of the debate.
-
• #437
Just searching for a product review of something and wading through twenty hits of AI-generated "reviews" that simply regurgitate the specs as a narrative shows where this can lead.
-
• #438
You invoked that, not me.
I'd use examples like the mechanisation of farming, which saw concerned farm labourers breaking the machines that replaced them, but far fewer people spend their lives doing physical hard labour out in the open all year round now.
AI can make positive efficiency gains. When I was a post-grad student, the lab I was working with had created what was then called an expert system, trained to assess mammograms for signs of breast cancer. Its use meant that the medical team could focus on the mammograms it had tagged as worthy of investigation, rather than having to assess every single one themselves. This reduced their workload enormously.
Personally, I don't see any real ethical concern with systems like that.
-
• #439
^ https://arstechnica.com/gadgets/2024/01/would-luddites-find-the-gig-economy-familiar/
The mechanisation of farming also coincided with the consolidation of land and money in fewer hands.
-
• #440
.
-
• #441
Human photographers will be the Artisan Sourdough to AI photography's Supermarket Sliced White in the future.
-
• #442
No one is disputing that AI can have benefits in fields like medicine, not even Oliver on the prior page.
The point Oliver was making is that examples like benefits in medicine are held up to co-opt public opinion, enabling a broadly unregulated AI revolution that has many detrimental societal consequences.
It is clear that there is widespread public concern over AI, and certainly no consent. If we lived in democratic societies there would be effective public consultation and meaningful regulation, and a freeze on most, if not all, AI development until that was concluded and implemented.
The idea that tech CEOs can self-regulate in coordination with governments is both laughable and profoundly anti-democratic.
Put more simply, if you look at Sam Altman and think 'yeah I trust that guy', read this:
-
• #443
You invoked that, not me.
Um, yeah, that's what I was saying.
And funnily enough, I had meant to mention upthread that one of the clear positive use cases was this exact type of medical application. But I would also point out this is just very sophisticated machine learning/pattern recognition. No worries there from me. Big thumbs up. But it's a world away from the much more worrying wank-fantasies of king edge lord Elon Musk and his Stans.
Edit: what @t-v just said, a hundred times more intelligently and articulately than I could.
-
• #444
I think it's too late TBH with regard to "content"/creative arts-based stuff. The genie's out of the bottle.
Consumers want cheap mass-produced shite (text, images, art, videos, sound). No one will ever be able to stop them spending money on cheap mass-produced shite. We might be able to stop Western companies training their models on this stuff for free but there is absolutely no chance we can enforce that globally. If you can't create this low-effort crap in the UK or US or wherever, the consumer will just get it from China or somewhere where people aren't so concerned about creatives having careers.
People who want good stuff will still have to pay for it to be curated or reviewed by a person (publishers, newspapers with editors, TV channels and streaming services, music reviews, etc.). We are already in the situation where there is too much "content" and 99% of it is terrible, so I don't think that will change at all.
-
• #445
I know it's fun to dunk on Tech Bros, but there are a lot of people and companies that are acutely aware of their societal responsibilities
I hate to feel that I'm reducing the whole argument to "capitalism bad", but regardless of the merits of a few individuals it's hard not to feel that the system is structured to deliver poor outcomes. Moving fast and breaking things is fine for an app menu, less so for society.
My only hope is that we are seeing a disproportionate coverage of AI in the creative sectors because journalists have a vested interest in promoting the issue. In the UK at least I think that fintech is miles ahead in terms of which tech sector has the largest investment. And ultimately making efficiencies in banking is less critical than destroying every cool job so you can turn out a shitter product with a better margin.
-
• #446
I'll tell you where AI and medicine isn't going to work, and that's leg ulcers and stuff.
While the dream might be "take a picture of the leg ulcer, compare it against others, suggest the right wound dressing for the carer to apply, carer applies right dressing, weeping leg ulcer heals and person is free to move about their life" the parts at play in that scenario are:
person with leg ulcer
camera used
training set of data
stressed carer
what dressings are actually available to the carer and leg ulcer
years of experience of carer
-
• #447
I know it's fun to dunk on Tech Bros, but there are a lot of people and companies that are acutely aware of their societal responsibilities and there is a long track record of successful industry-wide collaboration to improve and standardise areas of concern.
There’s also a long track record of obfuscation, misdirection and kicking the can down the road.
The Information carried a piece on Snapchat this week. Their claim is that they're "designed to be safe". Contrast that with their director of security engineering writing (internally) on the subject of child sexual abuse material: "that's fine it's been broken for ten years we can tolerate tonight". Or this nugget from their director of public policy: "There's only so many times we can say 'safety by design' or 'we're kind'. Politicians and regulators are looking for tangible substantive progress/initiatives".
There may well be well-meaning individuals, but any sense that you can trust the companies as a whole to do the right thing is, at best, misguided.
-
• #448
No one will ever be able to stop them spending money on cheap mass-produced shite. We might be able to stop Western companies training their models on this stuff for free but there is absolutely no chance we can enforce that globally. If you can't create this low-effort crap in the UK or US or wherever, the consumer will just get it from China or somewhere.
Ah yes, the 'because not everyone will likely be ethical, why should I have to be ethical' argument. I guess we could apply the same thing to, I dunno, criminal justice, warfare, corporate manslaughter etc etc?
-
• #449
I'm not justifying it, I'm just saying what I think will happen. Look at China's disregard for the Western patent system. We will never be able to stop unscrupulous people tapping into the huge amounts of money that can be made using generative AI.
-
• #450
Sorry, should have been clearer, my comment was a rhetorical extension of that reasoning, not an implication that you were meaning to justify either the rationale or even the situation itself. We're in agreement as far as I can see.
The fact that copyright doesn't explicitly cover the right to train AIs (a concept that has only existed for the past few years) doesn't mean copyright is limited in its applicability here. And yes, we'll get resolution via case law. But if AI companies are so confident in their position, why are they signing multi-million licensing arrangements with publishers across the board for training access?
Anyone asserting 'copyright is a load of bollocks' and/or advocating on the side of AI companies against content creators is going to struggle not to look like a big tech shill.