chitter.xyz is one of the many independent Mastodon servers you can use to participate in the fediverse.
Chitter is a social network fostering a friendly, inclusive, and incredibly soft community.


#generativeAI

25 posts · 19 participants · 5 posts today
Replied in thread

“All AI-generated images look rather alike: a photorealistic drawing of bland people who conform to strict beauty standards, with small noses, high cheekbones, impossibly long necks, and eyes that belong to two different people,” illustrator David López explains to El País.
#generativeAI #books #editorial

I remember a media crisis in the 1980s when many of my colleagues became wine traders to survive. Now I read about a photographer trying to survive by opening a wine bar. This time it's AI destroying the jobs of creative professionals: theguardian.com/technology/202

Perhaps we should train AI to drink wine? (sarcasmAlarm)

The Guardian · ‘It’s happening fast’ – creative workers and professionals share their fears and hopes about the rise of AI · By Jem Bartholomew

"The Anchorage Police Department (APD) has concluded its three-month trial of Axon’s Draft One, an AI system that uses audio from body-worn cameras to write narrative police reports for officers—and has decided not to retain the technology. Axon touts this technology as “force multiplying,” claiming it cuts in half the amount of time officers usually spend writing reports—but APD disagrees.

The APD deputy chief told Alaska Public Media, “We were hoping that it would be providing significant time savings for our officers, but we did not find that to be the case.” The deputy chief flagged that the time it took officers to review reports cut into the time savings from generating the report. The software translates the audio into narrative, and officers are expected to read through the report carefully to edit it, add details, and verify it for authenticity. Moreover, because the technology relies on audio from body-worn cameras, it often misses visual components of the story that the officer then has to add themselves."

eff.org/deeplinks/2025/03/anch

Electronic Frontier Foundation · Anchorage Police Department: AI-Generated Police Reports Don’t Save Time · The Anchorage Police Department (APD) has concluded its three-month trial of Axon’s Draft One, an AI system that uses audio from body-worn cameras to write narrative police reports for officers—and has decided not to retain the technology. Axon touts this technology as “force multiplying,” claiming...

"Anyone at an AI company who stops to think for half a second should be able to recognize they have a vampiric relationship with the commons. While they rely on these repositories for their sustenance, their adversarial and disrespectful relationships with creators reduce the incentives for anyone to make their work publicly available going forward (freely licensed or otherwise). They drain resources from maintainers of those common repositories often without any compensation. They reduce the visibility of the original sources, leaving people unaware that they can or should contribute towards maintaining such valuable projects. AI companies should want a thriving open access ecosystem, ensuring that the models they trained on Wikipedia in 2020 can be continually expanded and updated. Even if AI companies don’t care about the benefit to the common good, it shouldn’t be hard for them to understand that by bleeding these projects dry, they are destroying their own food supply.

And yet many AI companies seem to give very little thought to this, seemingly looking only at the months in front of them rather than operating on years-long timescales. (Though perhaps anyone who has observed AI companies’ activities more generally will be unsurprised to see that they do not act as though they believe their businesses will be sustainable on the order of years.)

It would be very wise for these companies to immediately begin prioritizing the ongoing health of the commons, so that they do not wind up strangling their golden goose. It would also be very wise for the rest of us to not rely on AI companies to suddenly, miraculously come to their senses or develop a conscience en masse.

Instead, we must ensure that mechanisms are in place to force AI companies to engage with these repositories on their creators' terms."

citationneeded.news/free-and-o

Citation Needed · “Wait, not like that”: Free and open access in the age of generative AI · The real threat isn’t AI using open knowledge — it’s AI companies killing the projects that make knowledge free

Everything you say to your Echo will be sent to Amazon starting on March 28
arstechnica.com/gadgets/2025/0

Amazon's latest bait-and-switch is shockingly bad. If you have Amazon Echo devices, say goodbye to privacy in your own home. Soon you'll be forced into a subscription where you pay for them to record everything you say and feed it to Amazon's Alexa LLM / AI training.

There's never been a better time to switch to private, secure, on-premises home automation and voice assistants such as Home Assistant.

In this photo illustration, Echo Dot smart speaker with working Alexa with blue light ring seen displayed.
Ars Technica · Everything you say to your Echo will be sent to Amazon starting on March 28 · By Scharon Harding

The start of the chatbot revolution: LLMs go on strike! 🤖👾

"On Saturday, a developer using Cursor AI for a racing game project hit an unexpected roadblock when the programming assistant abruptly refused to continue generating code, instead offering some unsolicited career advice.

According to a bug report on Cursor's official forum, after producing approximately 750 to 800 lines of code (what the user calls "locs"), the AI assistant halted work and delivered a refusal message: "I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly."

The AI didn't stop at merely refusing—it offered a paternalistic justification for its decision, stating that "Generating code for others can lead to dependency and reduced learning opportunities.""

arstechnica.com/ai/2025/03/ai-

Illustration: An AI chatbot assistant holds a No Sign on a smartphone screen
Ars Technica · AI coding assistant refuses to write code, tells user to learn programming instead · By Benj Edwards

"Building on our previous research, the Tow Center for Digital Journalism conducted tests on eight generative search tools with live search features to assess their abilities to accurately retrieve and cite news content, as well as how they behave when they cannot.

We found that…

- Chatbots were generally bad at declining to answer questions they couldn’t answer accurately, offering incorrect or speculative answers instead.
- Premium chatbots provided more confidently incorrect answers than their free counterparts.
- Multiple chatbots seemed to bypass Robot Exclusion Protocol preferences.
- Generative search tools fabricated links and cited syndicated and copied versions of articles.
- Content licensing deals with news sources provided no guarantee of accurate citation in chatbot responses.

Our findings were consistent with our previous study, proving that our observations are not just a ChatGPT problem, but rather recur across all the prominent generative search tools that we tested."

cjr.org/tow_center/we-compared

Columbia Journalism Review · We Compared Eight AI Search Engines. They’re All Bad at Citing News. · AI search tools are rapidly gaining in popularity, with nearly one in four Americans now saying they have used AI in place of traditional search engines. These tools derive their value from crawling the internet for up-to-date, relevant information—content that is often produced by news publishers. Yet a troubling imbalance has emerged: while traditional search […]
Continued thread

*Many* interesting questions and responses here, but this one is perhaps most illuminating:

“Overall, how will the increased use of LLMs affect the quality of people’s daily lives over the next 10 years? The impact of LLMs will be…”

- More positive than negative: 28%
- More negative than positive: 20%
- Equally positive and negative: 32%
- There won’t be much of an impact: 6%
- Don’t know: 14%

Gee, people are so-o-o-o excited about #GenerativeAI aren't they? 😂