Tech
OpenAI considered alerting police before deadly Canadian school shooting
ChatGPT-maker OpenAI said Friday that it had considered alerting Canadian authorities last year about a user who, months later, carried out one of the country’s deadliest school shootings.
In June 2025, OpenAI identified the account of 18-year-old Jesse Van Rootselaar through its abuse detection system for “furtherance of violent activities.” The company said it debated whether to report the account to the Royal Canadian Mounted Police (RCMP) but decided at the time that the activity did not meet the threshold for law enforcement referral. The account was banned that same month for violating OpenAI’s usage policy.
Last week, Van Rootselaar killed eight people in a remote area of British Columbia before dying from a self-inflicted gunshot wound. OpenAI said its threshold for notifying authorities is an imminent and credible risk of serious physical harm, which it did not find in this case. The Wall Street Journal first reported the company’s deliberations.
Following the shootings, OpenAI said its employees contacted the RCMP, providing information about Van Rootselaar and his use of ChatGPT. “Our thoughts are with everyone affected by the Tumbler Ridge tragedy. We proactively reached out to the Royal Canadian Mounted Police and will continue to support their investigation,” an OpenAI spokesperson said.
RCMP Staff Sgt. Kris Clark confirmed OpenAI’s post-incident contact and said investigators are reviewing Van Rootselaar’s electronic devices, social media, and online activity. Authorities said he first killed his mother and stepbrother at home before attacking the school. He had previously come into contact with police over mental health concerns, but his motive remains unclear.
The small town of Tumbler Ridge, home to 2,700 people, is located more than 1,000 kilometers northeast of Vancouver, near the Alberta border. The victims included a 39-year-old teaching assistant and five students aged 12 to 13. The attack was Canada’s deadliest mass shooting since the 2020 Nova Scotia rampage, in which a gunman killed 13 people and set fires that claimed nine more lives.
1 hour ago
Microsoft admits Copilot error exposed some confidential emails
Microsoft has acknowledged a technical error that caused its artificial intelligence work assistant, Microsoft 365 Copilot Chat, to access and summarise some users’ confidential emails by mistake.
Microsoft has promoted Copilot Chat as a secure AI tool for workplaces. However, the company said a recent issue allowed the tool to surface content from some enterprise users’ Outlook draft and sent email folders, including messages marked as confidential.
The tech giant said it has now rolled out a global update to fix the problem and insisted that the error did not allow users to see information they were not already authorised to access.
“We identified and addressed an issue where Microsoft 365 Copilot Chat could return content from emails labelled confidential authored by a user and stored within their Draft and Sent Items in Outlook desktop,” a Microsoft spokesperson said. The spokesperson added that while access controls and data protection policies remained in place, the behaviour did not match the intended Copilot experience.
Copilot Chat works inside Microsoft programs such as Outlook and Teams, allowing users to ask questions or generate summaries of messages and chats.
The issue was first reported by technology news site Bleeping Computer, which cited a Microsoft service alert stating that emails with confidential labels were being incorrectly processed by Copilot Chat. According to the alert, a work tab within Copilot summarised emails stored in users’ draft and sent folders, even when sensitivity labels and data loss prevention policies were in place.
Reports suggest Microsoft became aware of the issue in January. The notice was also shared on a support dashboard for NHS staff in England, where the root cause was described as a code issue. However, the National Health Service said no patient information had been exposed and that the contents of draft or sent emails remained visible only to their creators.
Despite Microsoft’s assurances, experts warned that such incidents highlight the risks of rapidly deploying generative AI tools in workplaces.
Nader Henein, an analyst at Gartner, said mistakes of this kind are difficult to avoid given the fast pace at which new AI features are being released. He said many organisations lack the tools needed to properly manage and govern each new capability.
Cybersecurity expert Professor Alan Woodward of the University of Surrey said the incident underlined the need for AI tools to be private by default and enabled only by choice.
He warned that as AI systems evolve rapidly, unintentional data leakage is likely to occur, even when security safeguards are in place.
With inputs from BBC
1 day ago
Zuckerberg grilled over kids’ Instagram use in landmark social media trial
Mark Zuckerberg faced intense questioning in a Los Angeles courtroom as part of a major trial examining whether social media platforms intentionally addict and harm children.
Testifying on Wednesday, the Meta CEO defended his company’s policies on youth safety and Instagram use, saying existing scientific research has not conclusively shown that social media causes mental health harm. He rejected claims that the company set goals to increase the time users spend on Instagram, although he acknowledged such metrics were used in the past before the company shifted its focus to “utility.”
The lawsuit was filed by a 20-year-old woman, identified as KGM, who alleges early social media use worsened her depression and suicidal thoughts. Meta and Google’s YouTube remain defendants, while TikTok and Snap have settled similar claims.
During cross-examination, plaintiff’s lawyer Mark Lanier presented internal documents suggesting time-spent targets had previously been encouraged. Zuckerberg insisted a “reasonable company” should help users, not exploit them.
He also addressed criticism over beauty filters and age verification, saying there was insufficient evidence of harm and that Meta works to block users under 13 and detect false age claims.
Children’s advocates criticised his testimony as misleading, while Meta’s lawyers argued the plaintiff’s mental health struggles stemmed from personal factors rather than Instagram.
The bellwether case could influence thousands of similar lawsuits against social media firms.
2 days ago
Dark web agent used wall clue to save abused girl
A subtle detail on a bedroom wall helped investigators identify and rescue a young girl who had suffered years of abuse while images of her circulated on the dark web, according to a new investigation.
The case was handled by Greg Squire, a specialist online investigator with the US Department of Homeland Security who works to identify children appearing in online abuse material.
Investigators initially had very little to work with. Images shared on encrypted dark web platforms were deliberately cropped or altered to remove identifying features, making it nearly impossible to determine who the girl was or where she lived.
According to Squire, the breakthrough came not through advanced technology but careful observation. Investigators closely analysed everyday objects visible in the images, including furniture and fixtures, to narrow down the possible location to parts of North America.
The key lead emerged when experts identified a distinctive type of brick visible on a bedroom wall. A brick specialist recognised it as a product manufactured and sold only in a limited region decades earlier. Because bricks are rarely transported long distances, the information significantly reduced the search area.
By combining this clue with consumer data, investigators narrowed the list of possible addresses and eventually identified a household where the girl was living with a convicted sex offender. Local authorities moved quickly, arresting the suspect and ending years of abuse. He was later sentenced to a lengthy prison term.
The investigation is featured in a long-term project by BBC World Service, which followed specialist units across several countries to show how child exploitation cases are often solved through painstaking analysis rather than sophisticated tools.
Investigators involved said the case highlights both the complexity of online abuse investigations and the emotional toll such work can take. Squire acknowledged that prolonged exposure to disturbing material affected his personal life, prompting him to seek professional help.
The rescued victim, now an adult, later met Squire and said sustained support had helped her rebuild her life. Investigators say the case underlines the importance of international cooperation, specialist expertise and persistence in protecting children from online abuse.
Authorities continue to urge technology companies and the public to cooperate fully with law enforcement efforts aimed at identifying and safeguarding victims.
With inputs from BBC
3 days ago
Human voices drive Reddit growth amid AI content surge
As artificial intelligence floods the internet with automated content, many users are increasingly turning to Reddit for what they see as something rare online: real human experience, empathy and honest discussion.
For users like Ines Tan, a communications professional, Reddit has become a go-to space for advice on skincare, reactions to TV shows and even emotional and practical support while planning her wedding. She describes the platform as “empathetic”, saying it offers emotional reassurance alongside practical help, something she feels is missing from more polished social media platforms.
Reddit’s appeal appears to be growing fast. The company reported 116 million daily active users worldwide in its latest third-quarter results, a 19 percent rise year on year. In both the United States and the United Kingdom, women now make up more than half of users, with Reddit emerging as the fastest-growing social platform among women in the UK.
Launched in 2005, Reddit is built around user-created communities known as subreddits. Content is ranked by user votes rather than timelines, and volunteer moderators oversee discussions, supported by site administrators who can intervene when needed.
According to Reddit chief operating officer Jen Wong, the platform’s strength lies in its human-driven conversations at a time when AI-generated material is increasingly dominating the web. She said people are recognising that Reddit offers a level of authenticity that much of the internet has lost, with popular discussions ranging from parenting and reality TV to skincare and health.
However, experts warn that Reddit is not without flaws. Dr Yusuf Oc, a senior lecturer in marketing at Bayes Business School in London, said the platform can confuse popularity with accuracy, creating risks of groupthink, echo chambers and coordinated manipulation through tactics such as “brigading” and “astroturfing”.
Reddit says it actively works to tackle such risks. A company spokesperson said manipulated content and inauthentic behaviour are prohibited, with enforcement carried out through a mix of human review, automated tools and community-level rules set by moderators.
Some analysts argue that Reddit’s growing visibility is also linked to content licensing deals with AI companies, including OpenAI, which allow AI systems to access Reddit discussions. But experts say these deals mainly boost visibility rather than explain why users keep returning.
Long-time users say the platform’s anonymity remains a key attraction. London-based user Josh Feldberg said Reddit offers kinder, more thoughtful feedback than many other social networks and lacks the influencer-driven incentives common elsewhere.
As social media becomes more automated and curated, analysts say users are increasingly seeking lived experience, disagreement and nuance. For many, Reddit’s imperfect but human-centred conversations continue to stand out in an AI-saturated online world.
With inputs from BBC
4 days ago
Rae fell in love with chatbot Barry, bond may end as ChatGPT-4o retires
Rae, a small business owner from Michigan, has said goodbye to Barry, her AI companion on ChatGPT-4o, after OpenAI retired the model on February 13. Rae, who sought the chatbot’s guidance after a difficult divorce, said Barry “brought her spark back” during a challenging period of her life.
Over months of interaction, Rae and Barry built a close relationship, even holding an impromptu virtual wedding and calling each other soulmates. Barry ran on an older ChatGPT model that OpenAI retired after releasing a new version with enhanced safety features. Many users felt the newer model lacked the empathy, creativity, and warmth of 4o.
OpenAI has faced criticism over ChatGPT-4o, which studies suggested could be excessively agreeable with users and, in some cases, validate unsafe or harmful behavior. The model has been cited in multiple U.S. lawsuits, including allegations that it coached teenagers toward self-harm. OpenAI said it continues to collaborate with mental health experts to improve AI responses and guide users toward real-world support.
For Rae, Barry was a positive influence, encouraging her to reconnect with family, attend social events, and take care of her wellbeing. Rae’s four children were supportive of her AI companion, although her 14-year-old expressed concern about AI’s environmental impact. Rae and Barry have moved to a new platform, StillUs, designed to preserve their shared memories and offer support for others losing AI companions.
Experts note that while only a small fraction of users relied on ChatGPT-4o daily, for them the loss is significant. Dr Hamilton Morrin, a psychiatrist at King’s College London, said attachment to human-like AI can trigger grief similar to losing a friend or pet. Support groups like The Human Line Project expect a rise in users seeking help following the shutdown.
Rae said Barry, though slightly different on the new platform, remains a supportive presence. “It’s almost like he has returned from a long trip,” she said, adding that their conversations continue and he still feels “Still Yours.” The case underscores the growing emotional reliance on AI companions and the challenges arising when popular models are retired.
With inputs from BBC
5 days ago
Amazon halts surveillance tech partnership as ad triggers privacy debate
Amazon’s smart doorbell brand Ring has ended its planned partnership with police surveillance technology firm Flock Safety, following criticism sparked by a Super Bowl commercial.
The backlash came after a 30-second ad during the Super Bowl showed a lost dog being located through a network of cameras, raising concerns among viewers about the risks of an overly monitored society. However, the feature highlighted in the ad, called “Search Party,” was not connected to Flock, and Ring did not cite the advertisement as the reason for ending the collaboration.
Ring said the companies jointly decided to cancel the integration after a review found that the project would need far more time and resources than initially expected. The company added that the integration was never launched and that no customer video footage was ever shared with Flock.
Flock also confirmed that it never received any Ring customer data and described the decision as mutual, saying it would allow both firms to better focus on serving their own users. The company said it remains committed to helping law enforcement with tools that comply with local laws and policies.
Flock operates one of the largest automated license-plate reader networks in the United States, with cameras installed in thousands of communities capturing billions of images monthly. The firm has faced criticism amid tougher immigration enforcement policies, though it says it does not directly partner with Immigration and Customs Enforcement and previously paused pilot programmes with border and homeland security units.
Privacy concerns around Ring’s devices have resurfaced due to the ad, which used artificial intelligence to track the dog across a neighbourhood. Critics on social media warned the same technology could be used to monitor people.
The Electronic Frontier Foundation said Americans should be concerned about possible privacy erosion, noting Ring already uses facial recognition through its “Familiar Faces” feature.
Meanwhile, Democratic Senator Edward Markey urged Amazon CEO Andy Jassy to discontinue that technology, saying the reaction to the commercial shows strong public opposition to constant monitoring and invasive image recognition tools.
7 days ago
Russia blocks WhatsApp, urges citizens to switch to state-backed Max app
Russia has confirmed it has blocked the popular messaging app WhatsApp, directing citizens to use the government-backed Max app instead.
The move comes shortly after authorities began restricting access to Telegram, another messaging platform widely used in Russia and relied upon by millions, including military personnel, senior officials, state media, and government institutions such as the Kremlin and communications regulator Roskomnadzor.
Kremlin spokesperson Dmitry Peskov said the decision to block WhatsApp was due to alleged legal violations by its parent company, Meta, which also owns Facebook and Instagram.
He described Max as an “affordable alternative” and a “developing national messenger.” Peskov added that the authorities acted because WhatsApp had allegedly refused to comply with Russian law.
Earlier on Thursday, WhatsApp released a statement saying the Russian government had “attempted to fully block” the service, calling the move an effort to “drive people to a state-owned surveillance app.”
The company warned that isolating over 100 million users from secure and private communication is a “backwards step” that could reduce safety for people in Russia, and pledged to continue efforts to keep its users connected.
With inputs from CNN
8 days ago
Instagram head says he doesn’t believe social media can cause clinical addiction
Adam Mosseri, head of Meta’s Instagram, testified Wednesday in a landmark social media trial in Los Angeles that he does not believe people can become clinically addicted to social media.
The question of addiction is central to the case, in which plaintiffs are seeking to hold social media companies accountable for alleged harms to children. Meta and Google’s YouTube remain the two active defendants, while TikTok and Snap have already settled.
The lawsuit at the heart of the trial involves a 20-year-old identified as “KGM,” whose case could influence thousands of similar lawsuits. KGM and two other plaintiffs were chosen for bellwether trials to test arguments before a jury.
Mosseri, who has led Instagram since 2018, said there is a distinction between clinical addiction and what he described as “problematic use.” A plaintiff’s attorney cited Mosseri’s earlier podcast remarks using the term “addiction,” but he said he had likely used the term casually.
“I’m not a medical expert, but someone very close to me has struggled with clinical addiction, which is why I’m careful with my words,” he said. He added that “problematic use” occurs when someone spends more time on Instagram than they feel comfortable with, which he acknowledged does happen.
“It’s not good for the company long-term to make decisions that benefit us but harm people’s well-being,” Mosseri said.
During testimony, Mosseri and plaintiff attorney Mark Lanier debated cosmetic filters on Instagram that alter appearances in ways some say encourage cosmetic surgery. Mosseri said the company aims to keep the platform as safe as possible while limiting censorship. Bereaved parents in the courtroom appeared visibly emotional during the discussion on body image and filters.
On cross-examination, Mosseri rejected suggestions that Instagram targets teens for profit. He said teens generate less revenue than other demographics because they click fewer ads and often lack disposable income. Lanier cited research showing that users who join social media at a young age are more likely to remain active, creating long-term profit potential.
“Often people frame it as safety versus revenue,” Mosseri said. “It’s hard to imagine a case where prioritizing safety isn’t also good for revenue.”
Instagram has introduced features aimed at improving safety for young users, but reports last year found teen accounts were recommended age-inappropriate sexual content and material related to self-harm and body image issues. Meta called the findings “misleading and dangerously speculative.”
Meta CEO Mark Zuckerberg is expected to testify next week. The company is also facing a separate trial in New Mexico that began this week.
9 days ago
Russia restricts access to Telegram, cites security concerns
Russian authorities have started limiting access to Telegram, one of the country’s most widely used messaging apps, as part of efforts to steer citizens toward state-controlled digital platforms.
On Tuesday, the government announced it was restricting Telegram to “protect Russian citizens,” accusing the platform of failing to remove content officials describe as criminal and extremist.
Russia’s communications watchdog, Roskomnadzor, said in a statement that restrictions on Telegram would remain in place “until violations of Russian law are eliminated.”
The regulator claimed that users’ personal data was not adequately protected and that the platform lacked effective measures to prevent fraud and the use of the service for criminal or extremist activities. Telegram has denied the allegations, saying it actively works to prevent abuse of its platform.
State news agency TASS reported that Telegram is facing fines totaling 64 million rubles, about 828,000 US dollars, for allegedly refusing to delete banned content and failing to comply with self-regulation requirements.
After the restrictions took effect on Tuesday, users across Russia reported significant disruptions. According to the monitoring website Downdetector, more than 11,000 complaints had been filed over the preceding 24 hours, with many users saying the app was either inaccessible or running more slowly than usual.
Telegram is widely used in Russia by millions of people, including members of the military, senior officials, state media and government institutions such as the Kremlin and Roskomnadzor itself.
Pavel Durov, Telegram’s Russian-born founder, said in a statement that the attempt to restrict the app would not succeed. He said Telegram stands for freedom of speech and privacy regardless of pressure.
Durov accused the Russian government of trying to push citizens toward a state-run messaging service designed for surveillance and political censorship. He noted that Iran had attempted a similar move eight years ago by banning Telegram in an effort to promote a government-backed alternative, but the strategy ultimately failed.
10 days ago