Tech-News
OpenAI considered alerting police before deadly Canadian school shooting
ChatGPT-maker OpenAI said Friday that it had considered alerting Canadian authorities last year about a user who, months later, carried out one of the country’s deadliest school shootings.
In June 2025, OpenAI’s abuse detection system flagged the account of 18-year-old Jesse Van Rootselaar for “furtherance of violent activities.” The company said it debated whether to report the account to the Royal Canadian Mounted Police (RCMP) but decided at the time that the activity did not meet the threshold for law enforcement referral. The account was banned that same month for violating OpenAI’s usage policies.
Last week, Van Rootselaar killed eight people in a remote area of British Columbia before dying from a self-inflicted gunshot wound. OpenAI explained that its threshold for notifying authorities involves cases with an imminent and credible risk of serious physical harm, which it did not find in this instance. The Wall Street Journal first reported the company’s revelation.
Following the shootings, OpenAI said its employees contacted the RCMP, providing information about Van Rootselaar and his use of ChatGPT. “Our thoughts are with everyone affected by the Tumbler Ridge tragedy. We proactively reached out to the Royal Canadian Mounted Police and will continue to support their investigation,” an OpenAI spokesperson said.
RCMP Staff Sgt. Kris Clark confirmed OpenAI’s post-incident contact and said investigators are reviewing Van Rootselaar’s electronic devices, social media, and online activity. Authorities said he first killed his mother and stepbrother at home before attacking the school. He had prior mental health contacts with police, but his motive remains unclear.
The small town of Tumbler Ridge, home to 2,700 people, is located over 1,000 kilometers northeast of Vancouver, near the Alberta border. The victims included a 39-year-old teaching assistant and five students aged 12 to 13. The attack was Canada’s deadliest since the 2020 Nova Scotia rampage, in which a gunman killed 13 people and set fires that claimed nine more lives.
1 hour ago
Microsoft admits Copilot error exposed some confidential emails
Microsoft has acknowledged a technical error that caused its artificial intelligence work assistant, Microsoft 365 Copilot Chat, to access and summarise some users’ confidential emails by mistake.
Microsoft has promoted Copilot Chat as a secure AI tool for workplaces. However, the company said a recent issue allowed the tool to surface content from some enterprise users’ Outlook draft and sent email folders, including messages marked as confidential.
The tech giant said it has now rolled out a global update to fix the problem and insisted that the error did not allow users to see information they were not already authorised to access.
“We identified and addressed an issue where Microsoft 365 Copilot Chat could return content from emails labelled confidential authored by a user and stored within their Draft and Sent Items in Outlook desktop,” a Microsoft spokesperson said. The spokesperson added that while access controls and data protection policies remained in place, the behaviour did not match the intended Copilot experience.
Copilot Chat works inside Microsoft programs such as Outlook and Teams, allowing users to ask questions or generate summaries of messages and chats.
The issue was first reported by technology news site Bleeping Computer, which cited a Microsoft service alert stating that emails with confidential labels were being incorrectly processed by Copilot Chat. According to the alert, a work tab within Copilot summarised emails stored in users’ draft and sent folders, even when sensitivity labels and data loss prevention policies were in place.
Reports suggest Microsoft became aware of the issue in January. The notice was also shared on a support dashboard for NHS staff in England, where the root cause was described as a code issue. However, the National Health Service said no patient information had been exposed and that the contents of draft or sent emails remained visible only to their creators.
Despite Microsoft’s assurances, experts warned that such incidents highlight the risks of rapidly deploying generative AI tools in workplaces.
Nader Henein, an analyst at Gartner, said mistakes of this kind are difficult to avoid given the fast pace at which new AI features are being released. He said many organisations lack the tools needed to properly manage and govern each new capability.
Cybersecurity expert Professor Alan Woodward of the University of Surrey said the incident underlined the need for AI tools to be private by default and enabled only by choice.
He warned that as AI systems evolve rapidly, unintentional data leakage is likely to occur, even when security safeguards are in place.
#With inputs from BBC
1 day ago
Dark web agent used wall clue to save abused girl
A subtle detail on a bedroom wall helped investigators identify and rescue a young girl who suffered years of abuse after images of her were circulated on the dark web, according to a new investigation.
The case was handled by Greg Squire, a specialist online investigator with the US Department of Homeland Security, who works to identify children appearing in online abuse material.
Investigators initially had very little to work with. Images shared on encrypted dark web platforms were deliberately cropped or altered to remove identifying features, making it nearly impossible to determine who the girl was or where she lived.
According to Squire, the breakthrough came not through advanced technology but careful observation. Investigators closely analysed everyday objects visible in the images, including furniture and fixtures, to narrow down the possible location to parts of North America.
The key lead emerged when experts identified a distinctive type of brick visible on a bedroom wall. A brick specialist recognised it as a product manufactured and sold only in a limited region decades earlier. Because bricks are rarely transported long distances, the information significantly reduced the search area.
By combining this clue with other consumer data, investigators narrowed the list of possible addresses and eventually identified a household where the girl was living with a convicted sex offender. Local authorities moved quickly, arresting the suspect and ending years of abuse. He was later sentenced to a lengthy prison term.
The investigation is featured in a long-term project by BBC World Service, which followed specialist units across several countries to show how child exploitation cases are often solved through painstaking analysis rather than sophisticated tools.
Investigators involved said the case highlights both the complexity of online abuse investigations and the emotional toll such work can take. Squire acknowledged that prolonged exposure to disturbing material affected his personal life, prompting him to seek professional help.
The rescued victim, now an adult, later met Squire and said sustained support had helped her rebuild her life. Investigators say the case underlines the importance of international cooperation, specialist expertise and persistence in protecting children from online abuse.
Authorities continue to urge technology companies and the public to cooperate fully with law enforcement efforts aimed at identifying and safeguarding victims.
With inputs from BBC
3 days ago
Amazon halts surveillance tech partnership as ad triggers privacy debate
Amazon’s smart doorbell brand Ring has ended its planned partnership with police surveillance technology firm Flock Safety, following criticism sparked by a Super Bowl commercial.
The backlash came after a 30-second ad during the Super Bowl showed a lost dog being located through a network of cameras, raising concerns among viewers about the risks of an overly monitored society. However, the feature highlighted in the ad, called “Search Party,” was not connected to Flock, and Ring did not cite the advertisement as the reason for ending the collaboration.
Ring said the companies jointly decided to cancel the integration after a review found that the project would need far more time and resources than initially expected. The company added that the integration was never launched and that no customer video footage was ever shared with Flock.
Flock also confirmed that it never received any Ring customer data and described the decision as mutual, saying it would allow both firms to better focus on serving their own users. The company said it remains committed to helping law enforcement with tools that comply with local laws and policies.
Flock operates one of the largest automated license-plate reader networks in the United States, with cameras installed in thousands of communities capturing billions of images monthly. The firm has faced criticism amid tougher immigration enforcement policies, though it says it does not directly partner with Immigration and Customs Enforcement and previously paused pilot programmes with border and homeland security units.
Privacy concerns around Ring’s devices have resurfaced due to the ad, which used artificial intelligence to track the dog across a neighbourhood. Critics on social media warned the same technology could be used to monitor people.
The Electronic Frontier Foundation said Americans should be concerned about possible privacy erosion, noting Ring already uses facial recognition through its “Familiar Faces” feature.
Meanwhile, Democratic Senator Edward Markey urged Amazon CEO Andrew Jassy to discontinue that technology, saying the reaction to the commercial shows strong public opposition to constant monitoring and invasive image recognition tools.
7 days ago
Russia blocks WhatsApp, urges citizens to switch to state-backed Max app
Russia has confirmed it has blocked the popular messaging app WhatsApp, directing citizens to use the government-backed Max app instead.
The move comes shortly after authorities began restricting access to Telegram, another widely used messaging platform in Russia, relied upon by millions including military personnel, senior officials, state media, and government agencies such as the Kremlin and communications regulator Roskomnadzor.
Kremlin spokesperson Dmitry Peskov said the decision to block WhatsApp was due to alleged legal violations by its parent company, Meta, which also owns Facebook and Instagram.
He described Max as an “affordable alternative” and a “developing national messenger.” Peskov added that the authorities acted because WhatsApp had allegedly refused to comply with Russian law.
Earlier on Thursday, WhatsApp released a statement saying the Russian government had “attempted to fully block” the service, calling the move an effort to “drive people to a state-owned surveillance app.”
The company warned that isolating over 100 million users from secure and private communication is a “backwards step” that could reduce safety for people in Russia, and pledged to continue efforts to keep its users connected.
#With inputs from CNN
8 days ago
Instagram head says he doesn’t believe social media can cause clinical addiction
Adam Mosseri, head of Meta’s Instagram, testified Wednesday in a landmark social media trial in Los Angeles that he does not believe people can become clinically addicted to social media.
The question of addiction is central to the case, in which plaintiffs are seeking to hold social media companies accountable for alleged harms to children. Meta and Google’s YouTube remain the two active defendants, while TikTok and Snap have already settled.
The lawsuit at the heart of the trial involves a 20-year-old identified as “KGM,” whose case could influence thousands of similar lawsuits. KGM and two other plaintiffs were chosen for bellwether trials to test arguments before a jury.
Mosseri, who has led Instagram since 2018, said there is a distinction between clinical addiction and what he described as “problematic use.” A plaintiff’s attorney cited Mosseri’s earlier podcast remarks using the term “addiction,” but he said he had likely used the term casually.
“I’m not a medical expert, but someone very close to me has struggled with clinical addiction, which is why I’m careful with my words,” he said. He added that “problematic use” occurs when someone spends more time on Instagram than they feel comfortable with, which he acknowledged does happen.
“It’s not good for the company long-term to make decisions that benefit us but harm people’s well-being,” Mosseri said.
During testimony, Mosseri and plaintiff attorney Mark Lanier debated cosmetic filters on Instagram that alter appearances in ways some say encourage cosmetic surgery. Mosseri said the company aims to keep the platform as safe as possible while limiting censorship. Bereaved parents in the courtroom appeared visibly emotional during the discussion on body image and filters.
On cross-examination, Mosseri rejected suggestions that Instagram targets teens for profit. He said teens generate less revenue than other demographics because they click fewer ads and often lack disposable income. Lanier cited research showing that users who join social media at a young age are more likely to remain active, creating long-term profit potential.
“Often people frame it as safety versus revenue,” Mosseri said. “It’s hard to imagine a case where prioritizing safety isn’t also good for revenue.”
Instagram has introduced features aimed at improving safety for young users, but reports last year found teen accounts were recommended age-inappropriate sexual content and material related to self-harm and body image issues. Meta called the findings “misleading and dangerously speculative.”
Meta CEO Mark Zuckerberg is expected to testify next week. The company is also facing a separate trial in New Mexico that began this week.
9 days ago
Russia restricts access to Telegram, cites security concerns
Russian authorities have started limiting access to Telegram, one of the country’s most widely used messaging apps, as part of efforts to steer citizens toward state-controlled digital platforms.
On Tuesday, the government announced it was restricting Telegram to “protect Russian citizens,” accusing the platform of failing to remove content officials describe as criminal and extremist.
Russia’s communications watchdog, Roskomnadzor, said in a statement that restrictions on Telegram would remain in place “until violations of Russian law are eliminated.”
The regulator claimed that users’ personal data was not adequately protected and that the platform lacked effective measures to prevent fraud and the use of the service for criminal or extremist activities. Telegram has denied the allegations, saying it actively works to prevent abuse of its platform.
State news agency TASS reported that Telegram is facing fines totaling 64 million rubles, about 828,000 US dollars, for allegedly refusing to delete banned content and failing to comply with self-regulation requirements.
After the restrictions took effect on Tuesday, users across Russia reported significant disruptions. According to the monitoring website Downdetector, more than 11,000 complaints were filed in the past 24 hours, with many users saying the app was either inaccessible or operating more slowly than usual.
Telegram is widely used in Russia by millions of people, including members of the military, senior officials, state media and government institutions such as the Kremlin and Roskomnadzor itself.
Pavel Durov, Telegram’s Russian-born founder, said in a statement that the attempt to restrict the app would not succeed. He said Telegram stands for freedom of speech and privacy regardless of pressure.
Durov accused the Russian government of trying to push citizens toward a state-run messaging service designed for surveillance and political censorship. He noted that Iran had attempted a similar move eight years ago by banning Telegram in an effort to promote a government-backed alternative, but the strategy ultimately failed.
10 days ago
Bitcoin drops to lowest level in over a year
Bitcoin prices have dropped to their lowest level in about 16 months, despite strong public support for cryptocurrency from US President Donald Trump.
At one point, Bitcoin fell to around $60,000, the lowest since September 2024, before recovering slightly. The fall came after a long rally that pushed the digital currency to a record high of $122,200 in October 2025.
Joshua Chu, co-chair of the Hong Kong Web3 Association, told Reuters that investors who took big risks are now confronting the reality of market swings. He said the current situation is a reminder of how important risk management is in volatile markets.
Bitcoin had gained strong momentum over the past year, helped by Trump’s vocal backing of crypto and his promise to ease regulations on the sector. However, after Thursday’s drop, Bitcoin is now down about 32% over the past 12 months and is moving closer to price levels seen in early 2024 and 2021.
Bitcoin is the world’s largest and most well-known cryptocurrency. It is a form of digital money that is not controlled by any central bank or government.
According to the UK’s Financial Conduct Authority (FCA), about 8% of UK adults invested in crypto in 2025, down from the previous year. However, the average amount invested has increased, with many people now holding between £1,000 and £5,000 worth of digital assets.
After returning to the White House in January 2025, Trump signed an executive order aiming to make the US the world’s leading hub for cryptocurrency. He also launched his own crypto-related business ventures and continued involvement in family-owned crypto investment firms.
During his current term, the Trump administration has taken several pro-crypto steps, including reducing regulatory enforcement. Democrats, however, have criticised his approach, saying Trump has personally gained billions of dollars from crypto holdings and transactions.
Analysts say Bitcoin’s latest fall may be linked to Trump’s nomination of Kevin Warsh as the new head of the US Federal Reserve. Some investors expect tighter monetary policy, which usually puts pressure on assets like cryptocurrencies.
Deutsche Bank said Bitcoin has been falling for four months, with growing negative sentiment as traditional investors lose interest. While the bank does not expect crypto to disappear, it also does not see a quick return to past highs.
Other major cryptocurrencies, including Ethereum and Solana, have also fallen about 37% so far this year. CoinGecko reports that the overall crypto market has lost more than $2 trillion in value since its October peak.
With inputs from BBC.
14 days ago
YouTube rolls out auto-dubbing globally with expanded language support
YouTube has expanded its auto-dubbing feature worldwide, allowing creators to reach a broader global audience as the platform added support for 27 languages and introduced new tools to improve translated audio quality.
The video-sharing platform said auto-dubbing is now available to all users, marking a major step in reducing language barriers on YouTube. The company reported that in December 2025 alone, about six million daily viewers watched at least 10 minutes of auto-dubbed content, indicating growing adoption of the feature.
Under the expanded system, videos can now be automatically dubbed into English from a wide range of languages, including Arabic, Bengali, Chinese, Dutch, French, German, Hindi, Japanese, Korean, Malayalam, Portuguese, Russian, Spanish, Tamil, Telugu, Turkish, Urdu and Vietnamese, among others. Dubbing from English is currently supported in 20 languages, including Bengali, Hindi, French, German, Japanese, Korean, Portuguese and Spanish.
YouTube has also launched an “expressive speech” feature for channels in eight languages – English, French, German, Hindi, Indonesian, Italian, Portuguese and Spanish. The company said this tool is designed to better capture the original tone, emotion and energy of the speaker, making dubbed audio sound more natural.
In addition, YouTube has introduced a “preferred language” setting that gives users more control over how they consume content. While the platform still selects a video’s audio language automatically based on viewing history by default, users can now set preferred languages so that videos originally uploaded in those languages play without translation.
Acknowledging that dubbed videos may sometimes appear unnatural due to mismatched lip movements, YouTube said it is testing a lip-sync pilot feature that aligns translated audio with a speaker’s lip movements to create a more realistic viewing experience.
The company said creators have also been considered in the rollout. YouTube’s smart filtering technology can identify content that should not be dubbed, such as music videos or silent vlogs. According to the platform, auto-dubbing will not negatively affect a video’s discoverability and could help creators reach new audiences in other languages.
#With inputs from Hindustan Times
15 days ago
Malaysia imposes full ban on e-waste imports to stop illegal dumping
Malaysia has announced an immediate and complete ban on the import of electronic waste, declaring it will no longer allow itself to become a dumping ground for hazardous waste from abroad.
The Malaysian Anti-Corruption Commission (MACC) said late Wednesday that all electronic waste, or e-waste, has been reclassified under the “absolute prohibition” category with immediate effect. The move removes the discretion previously held by the Department of Environment to approve exemptions for importing certain types of e-waste.
MACC chief Azam Baki said e-waste imports are now strictly prohibited and pledged firm and coordinated enforcement to prevent illegal shipments from entering the country.
Malaysia has struggled for years with large volumes of imported e-waste, much of it suspected to be illegal and harmful to both human health and the environment. Authorities have seized hundreds of containers at ports in recent years and ordered many shipments to be returned to their countries of origin.
Environmental groups have repeatedly called for tougher measures, warning that e-waste such as discarded computers, mobile phones and household appliances often contains toxic substances and heavy metals, including lead, mercury and cadmium, which can contaminate soil and water if mishandled.
The ban comes as authorities expand a corruption investigation linked to e-waste management. Last week, the MACC detained and remanded the director-general of the Department of Environment and his deputy over alleged abuse of power and corruption related to e-waste oversight. Investigators have also frozen bank accounts and seized cash connected to the case.
Meanwhile, Malaysia’s Home Ministry said in a social media post that the government would step up efforts to curb e-waste smuggling.
“Malaysia is not a dumping ground for the world’s waste,” the ministry said, adding that e-waste poses a serious threat to the environment, public health and national security.
16 days ago