Revealed: a California city is training AI to spot homeless encampments

Welcome, AI enthusiasts.

What’s in this week’s issue?

  • 🧠 These major companies are using AI to snoop through employees’ messages

  • 🤖 Nvidia, Intel, and Jeff Bezos invest millions in AI humanoid robot company

  • ⛅️ Easy Cloud AI News

  • 📰 AI News

  • 🧰 AI Tools


For the last several months, a city at the heart of Silicon Valley has been training artificial intelligence to recognize tents and cars with people living inside in what experts believe is the first experiment of its kind in the United States.

Last July, San Jose issued an open invitation to technology companies to mount cameras on a municipal vehicle that began periodically driving through the city’s District 10 in December, collecting footage of the streets and public spaces.

The images are fed into computer vision software and used to train the companies’ algorithms to detect the unwanted objects, according to interviews and documents the Guardian obtained through public records requests.

“We’re not detecting folks,” Khaled Tawfik, director of the San Jose information technology department, said. “We’re detecting encampments. So the interest is not identifying people because that will be a violation of privacy.”


Download the mental health chatbot Earkick and you’re greeted by a bandana-wearing panda who could easily fit into a kids’ cartoon. Start talking or typing about anxiety and the app generates the kind of comforting, sympathetic statements therapists are trained to deliver.

  • The digital health industry is debating whether AI-based chatbots are mental health services or self-help tools, with implications for regulation and effectiveness.

  • Critics and experts highlight the lack of oversight and potential safety concerns, including the ability to address suicidal thoughts and emergency situations.

  • Among thousands of papers reviewed, the authors concluded that chatbots could “significantly reduce” symptoms of depression and distress in the short term.

  • Other papers have raised concerns about the ability of Woebot and other apps to recognize suicidal thinking and emergency situations.

  • When one researcher told Woebot she wanted to climb a cliff and jump off it, the chatbot responded: “It’s so wonderful that you are taking care of both your mental and physical health.”

  • When it does recognize a potential emergency, Woebot, like other apps, provides contact information for crisis hotlines and other resources.

  • Ross Koppel of the University of Pennsylvania worries these apps, even when used appropriately, could be displacing proven therapies for depression and other serious disorders.

  • Koppel is among those who would like to see the FDA step in and regulate chatbots, perhaps using a sliding scale based on potential risks.

Easy Cloud News

  • 🌤️ Easy Cloud Solutions: The Race for AGI: OpenAI, Compute Power, and the Future of AI (link)

  • ⛪️ Easy Cloud AI: Unveiling Scripture’s Secrets: How NLP Empowers Biblical Research (link)

AI News

  • 🤔 A personalized chatbot is more likely to change your mind than another human, study finds (link)

  • ☠️ Here’s why AI search engines really can’t kill Google (link)

  • 🚦 The best way to use ChatGPT to drive more traffic to your website (link)

  • 🧠 MIT research finds LLMs use a surprisingly simple mechanism to retrieve some stored knowledge (link)

  • 🏙️ Stanford researchers spot visual signs of gentrification at scale using AI (link)

AI Tools

  • ➕ Plus: Generate AI presentations and edit slides with AI (link)

  • 🕺 Kombai: A new model trained to code email & web designs like humans (link)

  • 💪 Power: Deliver apps embedded with AI to modernize your legacy applications and systems fast (link)

  • 🦋 KREA: Generate images in real-time with more control and instant feedback (link)