chitter.xyz is one of the many independent Mastodon servers you can use to participate in the fediverse.
Chitter is a social network fostering a friendly, inclusive, and incredibly soft community.

#chatbots

3 posts · 3 participants · 0 posts today

The start of the chatbot revolution: LLMs start striking! 🤖👾

"On Saturday, a developer using Cursor AI for a racing game project hit an unexpected roadblock when the programming assistant abruptly refused to continue generating code, instead offering some unsolicited career advice.

According to a bug report on Cursor's official forum, after producing approximately 750 to 800 lines of code (what the user calls "locs"), the AI assistant halted work and delivered a refusal message: "I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly."

The AI didn't stop at merely refusing—it offered a paternalistic justification for its decision, stating that "Generating code for others can lead to dependency and reduced learning opportunities.""

arstechnica.com/ai/2025/03/ai-

Illustration: An AI chatbot assistant holds a No Sign on a smartphone screen
Ars Technica · AI coding assistant refuses to write code, tells user to learn programming instead · By Benj Edwards

What happens when a deepfake video sets off a spiral of violence? @marcuwekling's thriller "Views" explores the dark sides of social media and artificial intelligence. In "QualityLand" he wrote about chatbots and AI back in 2017. Now we look to the USA and see how Musk is deploying chatbots. Many of the things he describes in his books are becoming reality. #KI #Deepfakes #Chatbots

stefanpfeiffer.blog/2025/03/13

A photo of a kangaroo wearing a red vest and a red boxing glove, hopping through a futuristic city towards the Digital Boheme Cafe. The kangaroo is holding a sign that says #SaveSocial. The background contains autonomous cars, drones, digital display walls, humanoid robots, cameras, and lawn and vacuum robots.
StefanPfeiffer.Blog · Perfect deepfake videos, limited chatbots, and the kangaroo for #SaveSocial and against the far right

An event announcement that I'm very pleased about just came in via LiLaBi:

Destructive AI bullshit – Why ChatGPT, with the "social" media as its echo chamber, fuels right-wing extremism and increases inequality

When? 3 March 2025, 8:00 pm
Where? Extra Blues Bar, Kreuzstraße 6, 33602 Bielefeld

#Capulcu is a collective I know for its high-quality brochures on #digitaleSelbstverteidigung (digital self-defense). Now they have taken on the AI hype:

Volume VI – Debunk – A critique of AI

Contents of the brochure:

  1. The gold diggers of artificial intelligence
  2. Climate: the green vehicle for the AI offensive
  3. ChatGPT as a hegemony amplifier
  4. Chip production in the multi-crisis – the material side of artificial intelligence
www.lilabi.net · Destructive AI bullshit – Why ChatGPT, with the "social" media as its echo chamber, fuels right-wing extremism and increases inequality – LINKE LANDSCHAFT

#FacialRecognition software for people with faceblindness? :flan_excite:

#EmotionalRecognition software for people who have trouble reading members of the normative society? :flan_heart:

Complex #AI that can learn to recognize and understand small gestures by paralyzed people to make communication possible? :flan_hug:

#SuperComputers or whatever to help scientists and engineers develop tools that make life easier for marginalized people? :flan_hearts:

#ChatBots that increase everybody's prejudices? :flan_facepalm:

"In the study, the BBC asked ChatGPT, Copilot, Gemini and Perplexity to summarise 100 news stories and rated each answer.

It got journalists who were relevant experts in the subject of the article to rate the quality of answers from the AI assistants.

It found 51% of all AI answers to questions about the news were judged to have significant issues of some form.

Additionally, 19% of AI answers which cited BBC content introduced factual errors, such as incorrect factual statements, numbers and dates.

In her blog, Ms Turness said the BBC was seeking to "open up a new conversation with AI tech providers" so we can "work together in partnership to find solutions".

She called on the tech companies to "pull back" their AI news summaries, as Apple did after complaints from the BBC that Apple Intelligence was misrepresenting news stories."

bbc.com/news/articles/c0m17d88

A phone screen with the app icons ChatGPT, Copilot, Gemini and Perplexity displayed
www.bbc.com · AI chatbots unable to accurately summarise news, BBC finds · The BBC's head of news and current affairs says the developers of the tools are "playing with fire."

Corey Quinn, weighing in on the Amazon Q Developer assistant's handling of AWS console errors in the latest "Last Week in AWS":

"Isn't this neat?! We have an AI assistant to tell you what to do for the error types we know about."
"Why not just make the documentation better / solve the errors in the first place?"
"Now listen here, you little..."

Today in "what's the point of that LLM":

AWS console has an error. It says "One or more charts failed to generate - try refreshing, and if it continues to happen then contact support."

They had a "Amazon Q Developer" button saying "get help to fix this". For a laugh, I clicked it.

It told me that "the error indicates that one or more of the charts cannot be loaded". It said it could be a temporary issue with the data source or a config issue.

Which is… completely useless. It gave me no extra information. It's clearly guessing at answers (because "config issue" isn't something I can fix in a standard, default AWS chart). But I'm sure some product manager and their team got a big bonus for adding "the chatbot" to the console 🙄

#KINews #Retröt
An article in PLOS ONE looks at the political #Bias of #Chatbots and shows that many AI models tend to give left-leaning answers. This could be traced back to supervised fine-tuning, in which the AI learns from human examples. Interestingly, different #LLMs show similar behaviour despite their different origins.

#KI #Chatbots #Politik #Bias #LLM #Politik #Gesellschaft

tino-eberl.de/ki-news/linksger

Tino Eberl · Left-leaning chatbots? An analysis of the political orientation of LLMs · Political bias in ChatGPT and co.: Is there a political orientation in LLMs, and how might it arise?

We have updated our publication on the opportunities and risks of generative #KI models, in German and English. The document is aimed primarily at companies and public authorities (#Behörden) that want to integrate generative AI models and applications based on them, for example #Chatbots with text and image generation capabilities, into their workflows. More information on the update: 👉 bsi.bund.de/dok/1135606

🇩🇪 bsi.bund.de/SharedDocs/Downloa
🇬🇧 bsi.bund.de/SharedDocs/Downloa

#DuckDuckGo is now offering free, #anonymized access to a number of fast #AI #chatbots that won't train on your data. You currently don't get all the premium models and features of paid services, but you do get access to privacy-promoting, anonymized versions of smaller models like GPT-4o mini from #OpenAI and open-source #MoE (mixture of experts) models like Mixtral 8x7B.

Of course, for truly sensitive or classified data you should never use online services at all. Anything online carries heightened risks of human error; deliberate malfeasance; corporate espionage; legal, illegal, or extra-legal warrants; and network wiretapping. I personally trust DuckDuckGo's no-logging policies and presume their anonymization techniques are sound, but those of us in #cybersecurity know the practical limitations of such measures.

For any situation where those measures are insufficient, you'll need to run your own instance of a suitable model on a local AI engine. However, that's not really the #threatmodel for the average user looking to get basic things done. Great use cases include finding quick answers that traditional search engines aren't good at, or performing common AI tasks like summarizing or improving textual information.
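
For anyone who does need that stricter threat model, here is a minimal sketch of what a local-only setup can look like. It assumes a hypothetical deployment: an Ollama server listening on localhost:11434 and exposing its OpenAI-compatible endpoint, with a small model already pulled; the model name, port, and prompt are illustrative only and have nothing to do with DuckDuckGo's service.

```python
# Minimal sketch: query a locally hosted model so prompts never leave the machine.
# Assumes a hypothetical local Ollama server on localhost:11434 exposing its
# OpenAI-compatible chat endpoint, with a small model (name is illustrative) pulled.
import json
import urllib.request


def ask_local_model(prompt: str, model: str = "mistral") -> str:
    # Build a standard OpenAI-style chat completion request.
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/v1/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    # The request goes only to localhost; nothing is sent to a third party.
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read().decode("utf-8"))
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask_local_model("Summarize this paragraph in one sentence: ..."))
```

Because both the prompt and the response stay on the local machine, none of the online risks listed above apply; the trade-off is that you are limited to whatever models your own hardware can run.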

The AI service provides the typical user with essential AI capabilities for free. It also takes steps to prevent for-profit entities with privacy-damaging #TOS from training on your data at will. DuckDuckGo's approach seems perfectly suited to these basic use cases.

I laud DuckDuckGo for their ongoing commitment to privacy, and for offering this valuable addition to the AI ecosystem.

duckduckgo.com/chat

duckduckgo.com · DuckDuckGo AI Chat at DuckDuckGo · DuckDuckGo. Privacy, Simplified.

#KINews #Retröt
Large #KI models like #GPT are increasingly giving wrong answers instead of admitting that they have no information. A new study shows that people often fail to notice these errors, which poses a major problem for the reliability of #Chatbots. Researchers recommend training the models so that, on difficult questions, they openly say when they lack the information.

#KünstlicheIntelligenz #LLMs #KIChatbots #KI #Science

tino-eberl.de/ki-news/warum-gr

Tino Eberl · Why larger AI chatbots give wrong answers more often · New study shows: the larger the language models, the more often AI chatbots give incorrect or evasive answers.

"I am typically curious about new technology. It took very little experimentation with LLMs for me to want to see if I could extract practical value. There is an allure to a technology that can (at least some of the time) craft sophisticated responses to challenging questions. It is even more exciting to watch a computer attempt to write a piece of a program as requested and make solid progress.

The only technological shift I have experienced that feels similar to me happened in 1995, when we first configured my LAN with a usable default route. I replaced the shared computer in the other room running Trumpet Winsock with a machine that could route a dialup connection, and all at once, I had the Internet on tap. Having the Internet all the time was astonishing and felt like the future. Probably far more to me in that moment than to many who had been on the Internet longer at universities because I was immediately dropped into high Internet technology: web browsers, JPEGs, and millions of people. Access to a powerful LLM feels like that.

So I followed this curiosity to see if a tool that can generate something mostly not wrong most of the time could be a net benefit in my daily work. The answer appears to be "yes"—generative models are useful for me when I program. It has not been easy to get to this point. My underlying fascination with the new technology is the only way I have managed to figure it out, so I am sympathetic when other engineers claim LLMs are “useless.” But as I have been asked more than once how I can possibly use them effectively, this post is my attempt to describe what I have found so far."

arstechnica.com/ai/2025/01/how

Photograph of a woman leaning back against her AI code buddy that just helped her do code or something
Ars Technica · How I program with LLMs · By Ars Contributors