AI

Microsoft Builds a Chat Bot To Match Patients To Clinical Trials (bloomberg.com) 9

Microsoft has built a "Clinical Trials Bot" that does just as the name implies: it suggests links to trials that best match a patient's needs. The bot "lets patients and doctors search for studies related to a disease and then answer a succession of text questions," reports Bloomberg. "Drugmakers can also use it to find test subjects." From the report: Microsoft won't release the bot as its own product. Instead, the software giant is talking to pharmaceutical companies that it hopes will use the bot to find trial participants, while pitching it to other partners that could turn the technology into a tool for patients, said Hadas Bitran, group manager of Microsoft Healthcare Israel. Bitran declined to name possible partners because no deals have been agreed to yet. The project is part of a larger Microsoft health-care bot initiative that's helped partners build automatic chat programs for things like triaging patients and answering questions about insurance benefits. The clinical trials bot was accepted as part of the U.S. White House Presidential Innovation Fellows program, and Bitran showed it on Thursday at a closed-door event at the White House. She will demonstrate the tech publicly on Friday during a session at the U.S. Census Bureau.

The technology uses a form of artificial intelligence called machine reading to ingest the selection criteria for each clinical trial. It uses this data to decide which questions to ask patients and how to match their answers to suitable trials. Here's how it works for patients: They type in a search, such as "trials for a 52-year-old California female with breast cancer." The bot responds with questions such as whether the patient had chemotherapy for metastatic disease -- a cancer that has spread -- and how far the patient can travel. It offers five choices that describe the patient's current health and ability to be active and care for herself. As the patient selects from the multiple-choice answers, the software generates the next question and refines the list of available trials.
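The refinement loop the report describes -- ask a question, filter the candidate trials, repeat -- can be sketched in a few lines. The trial names and eligibility criteria below are invented for illustration; they are not Microsoft's actual data or API.

```python
# Minimal sketch of iterative trial matching. All trials and
# criteria here are hypothetical examples.

TRIALS = {
    "Trial A": {"min_age": 18, "max_age": 65, "prior_chemo": False},
    "Trial B": {"min_age": 50, "max_age": 80, "prior_chemo": True},
    "Trial C": {"min_age": 40, "max_age": 70, "prior_chemo": False},
}

def matching_trials(patient):
    """Return trials whose criteria the patient's answers so far satisfy."""
    matches = []
    for name, c in TRIALS.items():
        if not (c["min_age"] <= patient["age"] <= c["max_age"]):
            continue
        # Only filter on questions the patient has already answered.
        if "prior_chemo" in patient and patient["prior_chemo"] != c["prior_chemo"]:
            continue
        matches.append(name)
    return matches

patient = {"age": 52}
print(matching_trials(patient))   # all three age-eligible trials
patient["prior_chemo"] = True     # answer to the bot's next question
print(matching_trials(patient))   # ['Trial B'] -- list narrows per answer
```

Each answer prunes the candidate set, which is also what lets the bot choose its next question: it only needs to ask about criteria that still distinguish the remaining trials.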

Medicine

Eating Processed Foods Tied To Shorter Life, Study Suggests (theguardian.com) 243

An anonymous reader quotes a report from The New York Times: The study, in JAMA Internal Medicine, tracked diet and health over eight years in more than 44,000 French men and women. Their average age was 58 at the start. About 29 percent of their energy intake was ultraprocessed foods. Such foods include instant noodles and soups, breakfast cereals, energy bars and drinks, chicken nuggets and many other ready-made meals and packaged snacks containing numerous ingredients and manufactured using industrial processes. There were 602 deaths over the course of the study, mostly from cancer and cardiovascular disease. Even after adjusting for many health, socioeconomic and behavioral characteristics, including scores on a scale of compliance with a healthy diet, the study found that for every 10 percent increase in ultraprocessed food consumption, there was a 14 percent increase in the risk of death. The authors suggest that high-temperature processing may form contaminants, that additives may be carcinogenic, and that the packaging of prepared foods can lead to contamination.
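To make the headline number concrete, here is the arithmetic if the reported association is treated as multiplicative across increments -- an assumption made purely for illustration, not a claim from the study itself:

```python
# Each 10-point increase in the share of ultraprocessed food was
# associated with a 14% higher risk of death. Compounding the effect
# across increments is an illustrative assumption, not study output.

def relative_risk(extra_percentage_points):
    """Relative risk vs. baseline, compounding 14% per 10-point increase."""
    increments = extra_percentage_points / 10
    return 1.14 ** increments

print(round(relative_risk(10), 2))  # 1.14 -> 14% higher risk
print(round(relative_risk(20), 2))  # 1.3  -> roughly 30% higher risk
```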
Social Networks

Twitter Still Can't Keep Up With Its Flood of Junk Accounts, Study Finds (wired.com) 39

Twitter still isn't keeping up with the flood of automated accounts designed to spread spam, inflate follower counts, and game trending topics, a new study finds. Wired reports: In a 16-month study of 1.5 billion tweets, Zubair Shafiq, a computer science professor at the University of Iowa, and his graduate student Shehroze Farooqi identified more than 167,000 apps using Twitter's API to automate bot accounts that spread tens of millions of tweets pushing spam, links to malware, and astroturfing campaigns. They write that more than 60 percent of the time, Twitter waited for those apps to send more than 100 tweets before identifying them as abusive; the researchers' own detection method had flagged the vast majority of the malicious apps after just a handful of tweets. For about 40 percent of the apps the pair checked, Twitter seemed to take more than a month longer than the study's method to spot an app's abusive tweeting. That lag time, they estimate, allows abusive apps to cumulatively churn out tens of millions of tweets per month before they're banned.
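A back-of-the-envelope calculation shows why the flagging threshold matters. Only the 167,000-app figure comes from the study; the per-app thresholds below are illustrative assumptions based on the reported "more than 100 tweets" versus "a handful":

```python
# Rough upper-bound illustration: total spam tweets that slip through
# before each abusive app is flagged, under two detection thresholds.
# Thresholds are illustrative assumptions, not the study's exact model.

def spam_before_flag(apps, tweets_until_flagged):
    """Total tweets all abusive apps emit before each one is flagged."""
    return apps * tweets_until_flagged

abusive_apps = 167_000                       # figure reported by the study
print(spam_before_flag(abusive_apps, 100))   # 16,700,000 -- "tens of millions"
print(spam_before_flag(abusive_apps, 5))     # 835,000 -- flagged after a handful
```

The twenty-fold gap between the two totals is the study's core complaint: detection latency, not detection accuracy, determines how much spam gets through.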

The researchers say they've been sharing their results with Twitter for more than a year but that the company hasn't asked for further details of their method or data. When WIRED reached out to Twitter, the company expressed appreciation for the study's goals but objected to its findings, arguing that the Iowa researchers lacked the full picture of how it's fighting abusive accounts. "Research based solely on publicly available information about accounts and tweets on Twitter often cannot paint an accurate or complete picture of the steps we take to enforce our developer policies," a spokesperson wrote.

Security

A Flaw Found in E-Ticketing Systems Used By at Least Eight Airlines Could Be Exploited To Access Sensitive Information About Travelers (betanews.com) 15

Eight airlines, including Southwest, use e-ticketing systems that could allow hackers to access sensitive information about travelers merely by intercepting emails, according to research published Wednesday by the mobile security company Wandera. From a news writeup: Researchers at security and data management company Wandera have uncovered a vulnerability affecting a number of e-ticketing systems that could allow third parties to view, and in some cases even change, a user's flight booking details, or print their boarding passes. The problem affects a number of major airlines including Southwest, Air France, KLM and Thomas Cook.

All of these have sent unencrypted check-in links to passengers. On clicking these links, a passenger is directed to a site where they are logged in automatically to the check-in for their flight, and in some cases they can then make changes to their booking.

Science

New Technique Allows Scientists To Create Materials That Get Stronger With More Use (newatlas.com) 31

Scientists at Hokkaido University have found a way to create materials that actually get stronger the more you use them. "By mimicking the mechanism that allows living muscles to grow and strengthen after exercise, the team led by Jian Ping Gong developed a polymer that breaks down under mechanical stress, then regrows itself into a stronger configuration by feeding off a nutrient bath," reports New Atlas. From the report: To achieve this, the Hokkaido team used what is called double-network hydrogels. Like other hydrogels, these are polymers that are 85 percent water by weight, but in this case, the material consists of both a rigid, brittle polymer and a soft, stretchable one. In this way, the finished product is both soft and tough. However, the clever bit is that under laboratory conditions the hydrogel was immersed in a bath of monomers, which are the individual molecular links that make up a polymer. These serve the same function in the muscle-mimicking material as amino acids do in living tissue.

According to the team, when the hydrogel is stretched, some of the brittle polymer chains break, creating a chemical species called "mechanoradicals" at the end of the broken polymer chains. These are very reactive and quickly join up with the floating monomers to form a new, stronger polymer chain. Under testing, the hydrogel acted much like muscles under strength training. It became 1.5 times stronger, 23 times stiffer, and increased in weight by 86 percent. It was even possible to control the properties of the material by using heat-sensitive monomers and applying high temperatures to make it more water resistant. Gong says this approach could lead to materials suitable for a variety of applications, such as in flexible exosuits for patients with skeletal injuries that become stronger with use.
The study has been published in the journal Science. For those interested, the researchers have published a video discussing the new hydrogel material.
Microsoft

Windows Setup Error Messages Will Soon Actually Help Fix Problems (arstechnica.com) 69

An anonymous reader quotes a report from Ars Technica: The next major Windows release, the Windows 10 April 2019 Update (codenamed 19H1), is going to offer some significant improvements [to error messages]. Microsoft described them on its Windows Insider webcast, and they were spotted initially by WinFuture. Currently, even the best case during installation is a terse error screen.

The message says that an incompatible application is detected, and a Knowledge Base article is referenced. It turns out that most Windows users don't know what "KBxxxxxxx" actually means, and the article isn't hyperlinked to make accessing it any easier. Issues detected through the other setup experience aren't much better. Windows will offer to uninstall problem applications, but often the better solution is to upgrade the application in question. The new setup process aims to be both more informative and more useful. The general approach is to allow decisions to be made within the setup program where possible and to put meaningful descriptions in the error messages, rather than leaving people with just a KB number to go on. Further, the "learn more" links will take you directly to the relevant Knowledge Base article, rather than hoping that end users know what "KBxxxxxxx" means. Third-party developers will also be able to provide information about upgrades and updates when applicable to resolving compatibility issues.

Science

Those Opposed To Scientific Consensus Bolstered By 'Illusion of Knowledge' (edmontonjournal.com) 432

The Edmonton Journal reports: Recently, researchers asked more than 2,000 American and European adults their thoughts about genetically modified foods. They also asked how much they thought they understood about GM foods, and posed a series of 15 true-false questions to test how much they actually knew about genetics and science in general. The researchers were interested in studying a perverse human phenomenon: People tend to be lousy judges of how much they know. Across four studies conducted in three countries -- the U.S., France and Germany -- the researchers found that extreme opponents of genetically modified foods "display a lack of insight into how much they know." They know the least, but think they know the most. "The less people know," the authors conclude, "the more opposed they are to the scientific consensus."

"Science communicators have made concerted efforts to educate the public with an eye to bringing their attitudes in line with the experts," they write in the journal Nature Human Behaviour. But people with an inflated sense of what they actually know -- and most in need of education -- are also the ones least likely to be open to new information.... Extreme views often come along with not appreciating the complexity of the subject -- "not realizing how much there is to know," said Philip Fernbach, lead author of the new study and a professor of marketing at the University of Colorado Boulder. "People who don't know very much think they know a lot, and that is the basis for their extreme views."

Slashdot reader Layzej links to RationalWiki's article on "The Backfire Effect" to illustrate Fernbach's observation that people "double down on their counter-scientific consensus attitudes, especially when people feel threatened or if they are being treated as if they are stupid."
YouTube

YouTube To Curb Conspiracy Theory Video Recommendations (venturebeat.com) 271

YouTube said today that it is retooling its recommendation algorithm that suggests new videos to users in order to prevent promoting conspiracies and false information, reflecting a growing willingness to quell misinformation on the world's largest video platform after several public missteps. From a report: These recommendations all too often serve up unsavory content: ludicrous conspiracy theories about mass-shooting events being staged, far-fetched proclamations that the moon landing never happened, and hare-brained notions that the Earth on which we live is, well, flat. Moving forward, YouTube promises that you'll see less of those kinds of videos. This is similar to moves it's made in the past to reduce clickbaity recommendations, or videos that are slight variations on something else you've watched.

"We'll continue that work this year, including taking a closer look at how we can reduce the spread of content that comes close to -- but doesn't quite cross the line of -- violating our Community Guidelines," YouTube said in a blog post. "While this shift will apply to less than one percent of the content on YouTube, we believe that limiting the recommendation of these types of videos will mean a better experience for the YouTube community."

United States

US Will Seek Extradition of Huawei CFO From Canada (reuters.com) 156

An anonymous reader quotes a report from Reuters: The U.S. Justice Department said on Tuesday it will pursue the extradition of the chief financial officer of China's Huawei, arrested in Canada in December. The United States has accused Huawei CFO Meng Wanzhou of misrepresenting the company's links to a firm that tried to sell equipment to Iran despite U.S. sanctions. The arrest soured relations between Canada and China, with China subsequently detaining two Canadian citizens and sentencing a third to death. The United States must file a formal request for extradition by Jan. 30. Once a formal request is received, a Canadian court has 30 days to determine whether there is enough evidence to support extradition and the Canadian minister of justice must issue a formal order. Canada has not asked the United States to abandon its bid to have Huawei executive Meng Wanzhou extradited, Canada's Foreign Minister Chrystia Freeland said in an interview with Bloomberg TV. "We will continue to pursue the extradition of defendant Ms. Meng Wanzhou, and will meet all deadlines set by the U.S./Canada Extradition Treaty," Justice Department spokesman Marc Raimondi said in a statement. "We greatly appreciate Canada's continuing support of our mutual efforts to enforce the rule of law."

Slashdot reader AmiMoJo shares a separate report from the BBC: The chairman of Chinese tech giant Huawei has warned his company could shift away from the U.S. and the U.K. if it continues to face restrictions. Huawei has been under scrutiny by Western governments, which fear its products could be used for spying. Speaking at the World Economic Forum, in Davos, Mr Liang Hua said his firm might transfer technology to countries "where we are welcomed." Huawei makes smartphones but is also a world leader in telecoms infrastructure, in particular the next generation of mobile phone networks, known as 5G.
Communications

Facebook Appears To Be Quietly Building Laser Satellites For Global Communications (ieee.org) 53

The snow-dusted peak of Mount Wilson in California has been home to many famous observatories. Until 1949, its 100-inch (2.5-meter) Hooker telescope was the largest aperture telescope in the world, and in 2004, its CHARA array became the world's largest optical interferometer. Now, two new observatories are being built there that, while not focused on the stars, might prove equally historic. They could house Facebook's first laser communications systems designed to connect to satellites in orbit. IEEE Spectrum reports: Construction permits issued by the County of Los Angeles show that a small company called PointView Tech is building two detached observatories on the mountain peak. PointView is the company that IEEE Spectrum revealed last year to be a previously unknown subsidiary of Facebook working on an experimental satellite called Athena. In April, PointView sought permission from the U.S. Federal Communications Commission to test whether E-band radio signals could "be used for the provision of fixed and mobile broadband access in unserved and underserved areas."

That application was still pending at the FCC before the current U.S. federal government shutdown took effect, but it and other public documents and presentations now strongly suggest that PointView is planning to utilize laser technology, possibly both in Athena and future spacecraft. Facebook has long been interested in free space optical, or laser, communication technology. Lasers are able to support much higher data rates than radio transmitters for a given input power, and their signals are largely immune to interference or hacking, although clouds can be problematic. Although Facebook developed millimeter-wave E-band links for its stratospheric Aquila drones, it was also experimenting with air-to-ground laser communications before it canceled its drone program last June. The laser tests, which used technology supplied by German company Mynaric, succeeded in establishing 10-gigabit-per-second links between a ground station and a light aircraft flying overhead.

EU

Dutch Surgeon Wins Landmark 'Right To Be Forgotten' Case (theguardian.com) 250

AmiMoJo shares a report from The Guardian: A Dutch surgeon formally disciplined for her medical negligence has won a legal action to remove Google search results about her case in a landmark "right to be forgotten" ruling. The doctor's registration on the register of healthcare professionals was initially suspended by a disciplinary panel because of her postoperative care of a patient. After an appeal, this was changed to a conditional suspension under which she was allowed to continue to practice. But the first results after entering the doctor's name in Google continued to be links to a website containing an unofficial blacklist, which it was claimed amounted to "digital pillory." It was heard that potential patients had found the blacklist on Google and discussed the case on a web forum. The surgeon's lawyer, Willem van Lynden, said the ruling was groundbreaking in ensuring doctors would no longer be judged by Google on their fitness to practice. "Now they will have to bring down thousands of pages: that is what will happen, in my view. There is a medical disciplinary panel but Google have been the judge until now. They have decided whether to take a page down -- and why do they have that position?" Van Lynden said.
Wikipedia

Happy 18th Birthday, Wikipedia (washingtonpost.com) 85

This week, Wikipedia celebrates its 18th birthday. If the massive crowdsourced encyclopedia project were human, then in most countries, it would just now be considered a legal adult. But in truth, the free online encyclopedia has long played the role of the Internet's good grown-up. From a story: Wikipedia has grown enormously since its inception: It now boasts 5.7 million articles in English and pulled in 92 billion page views last year. The site has also undergone a major reputation change. If you ask Siri, Alexa or Google Home a general-knowledge question, it will likely pull the response from Wikipedia. The online encyclopedia has been cited in more than 400 judicial opinions, according to a 2010 paper in the Yale Journal of Law & Technology.

Many professors are ditching the traditional writing assignment and instead asking students to expand or create a Wikipedia article on the topic. And YouTube Chief Executive Susan Wojcicki announced a plan last March to pair misleading conspiracy videos with links to corresponding articles from Wikipedia. Facebook has also released a feature using Wikipedia's content to provide users more information about the publication source for articles in their feed.

Twitter

Do Social Media Bots Have a Right To Free Speech? (thebulletin.org) 170

One study found that 66% of tweets with links were posted by "suspected bots" -- with an even higher percentage for certain kinds of content. Now a new California law will require bots to disclose that they are bots.

But does that violate the bots' freedom of speech, asks Laurent Sacharoff, a law professor at the University of Arkansas. "Even though bots are abstract entities, we might think of them as having free speech rights to the extent that they are promoting or promulgating useful information for the rest of us," Sacharoff says. "That's one theory of why a bot would have a First Amendment free speech right, almost independent of its creators." Alternatively, the bots could just be viewed as direct extensions of their human creators. In either case -- whether because of an independent right to free speech or because of a human creator's right -- Sacharoff says, "you can get to one or another nature of bots having some kind of free speech right."

In previous Bulletin coverage, the author of the new California law dismisses the idea that the law violates free speech rights. State Sen. Robert Hertzberg says anonymous marketing and electioneering bots are committing fraud. "My point is, you can say whatever the heck you want," Hertzberg says. "I don't want to control one bit of the content of what's being said. Zero, zero, zero, zero, zero, zero. All I want is for the person who has to hear the content to know it comes from a computer. To me, that's a fraud element versus a free speech element."

Sacharoff believes that the issue of bots and their potential First Amendment rights may one day have its day in court. Campaigns, he says, will find that bots are helpful and that their "usefulness derives from the fact that they don't have to disclose that they're bots. If some account is retweeting something, if they have to say, 'I'm a bot' every time, then it's less effective. So sure I can see some campaign seeking a declaratory judgment that the law is invalid," he says. "Ditto, I guess, [for] selling stuff on the commercial side."

Google

Google Wins Round in Fight Against Global Right To Be Forgotten (bloomberg.com) 66

Google shouldn't have to apply the so-called right to be forgotten globally, an adviser to the EU's top court said in a boost for the U.S. giant's fight with a French privacy regulator over where to draw the line between privacy and freedom of speech. From a report: While backing Google's stance, Advocate General Maciej Szpunar of the EU Court of Justice said that search engine operators must take every measure available to remove access to links to outdated or irrelevant information about a person on request. The Luxembourg-based court follows such advice in a majority of its final rulings, which normally come a few months after the opinions.

Google has been fighting efforts led by France's privacy watchdog to globalize the right to be forgotten, which was created by the EU court in a landmark ruling in 2014, without defining how, when and where search engine operators should remove links. This has triggered a wave of legal challenges. The Alphabet unit currently removes such links EU-wide and since 2016 it also restricts access to such information on non-EU Google sites when accessed from the EU country where the person concerned by the information is located -- referred to as geo-blocking. This approach was backed by Szpunar.

Communications

People Older Than 65 Share the Most Fake News, Study Finds (theverge.com) 403

An anonymous reader quotes a report from The Verge: Older Americans are disproportionately more likely to share fake news on Facebook, according to a new analysis by researchers at New York and Princeton Universities. Older users shared more fake news than younger ones regardless of education, sex, race, income, or how many links they shared. In fact, age predicted their behavior better than any other characteristic -- including party affiliation. Today's study, published in Science Advances, examined user behavior in the months before and after the 2016 U.S. presidential election. In early 2016, the academics started working with research firm YouGov to assemble a panel of 3,500 people, which included both Facebook users and non-users. On November 16th, just after the election, they asked Facebook users on the panel to install an application that allowed them to share data including public profile fields, religious and political views, posts to their own timelines, and the pages that they followed. Users could opt in or out of sharing individual categories of data, and researchers did not have access to the News Feeds or data about their friends.

About 49 percent of study participants who used Facebook agreed to share their profile data. Researchers then checked links posted to their timelines against a list of web domains that have historically shared fake news, as compiled by BuzzFeed reporter Craig Silverman. Later, they checked the links against four other lists of fake news stories and domains to see whether the results would be consistent. Across all age categories, sharing fake news was relatively rare. Only 8.5 percent of users in the study shared at least one link from a fake news site. Users who identified as conservative were more likely than users who identified as liberal to share fake news: 18 percent of Republicans shared links to fake news sites, compared to less than 4 percent of Democrats. The researchers attributed this finding largely to studies showing that in 2016, fake news overwhelmingly served to promote Trump's candidacy. But older users skewed the findings: 11 percent of users older than 65 shared a hoax, while just 3 percent of users 18 to 29 did. Facebook users ages 65 and older shared more than twice as many fake news articles as the next-oldest age group of 45 to 65, and nearly seven times as many fake news articles as the youngest age group (18 to 29).
As for why, researchers believe older people lack the digital literacy skills of their younger counterparts. They also say that people experience cognitive decline as they age, making them likelier to fall for hoaxes.
Google

Google Search Results Listings Can Be Manipulated For Propaganda (zdnet.com) 44

A feature of the Google search engine lets threat actors alter search results in a way that could be used to push political propaganda, oppressive views, or promote fake news. From a report: The feature is known as the "knowledge panel" and is a box that usually appears at the right side of the search results, usually highlighting the main search result for a very specific query. For example, searching for Barack Obama would bring up a box showing information from Barack Obama's Wikipedia page, along with links to the former president's social media profiles. But Wietze Beukema, a member of PwC's Cyber Threat Detection & Response team, has discovered that you can hijack these knowledge panels and add them to any search query, sometimes in a way that pushes legitimate search results way down the page, highlighting an incorrect result and making it look legitimate. The way this can be done is by first searching for a legitimate item, and pressing the "share" icon that appears inside a knowledge panel.
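According to public write-ups of the finding, the share link exposes an entity identifier (the "kgmid" URL parameter) that can then be paired with an arbitrary search query, attaching that entity's panel to it. The parameter name and the example entity ID below are assumptions drawn from those reports, not independently verified behavior:

```python
# Sketch of the reported manipulation: pairing an arbitrary query with a
# chosen knowledge-panel entity ID. "kgmid" and the example ID are
# assumptions based on public reporting of the flaw.

from urllib.parse import urlencode

def hijacked_search_url(query, entity_id):
    """Build a search URL that attaches a chosen panel to an unrelated query."""
    params = {"q": query, "kgmid": entity_id}
    return "https://www.google.com/search?" + urlencode(params)

print(hijacked_search_url("unrelated query", "/m/02mjmr"))
```

The risk described in the report is exactly this mismatch: the panel lends visual authority to a query it has nothing to do with.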
Privacy

Germany Reportedly Seeks US Assistance After Hacking Breach (bloomberg.com) 40

German authorities sought help from the U.S. National Security Agency after discovering that hackers had released private data linked to Chancellor Angela Merkel and hundreds of other German politicians, Bild newspaper reported. From a report: Responding to the biggest data dump of its kind in the country, German investigators wanted the U.S. intelligence agency to lean on Twitter to shut down profiles with links to the data, Bild said, citing unidentified security officials. German authorities argued that U.S. citizens were among thousands of people exposed by the data dump. As investigators seek to find out how data including email addresses, mobile phone numbers and private chat protocols were exposed, politicians took aim at Germany's Federal Office for Information Security, known as BSI, for failing to respond after receiving initial indications in December.
Communications

Facebook's WhatsApp Has an Encrypted Child Porn Problem (techcrunch.com) 156

Videos and pictures of children being subjected to sexual abuse are being openly shared on Facebook's WhatsApp on a vast scale, with the encrypted messaging service failing to curb the problem despite banning thousands of accounts every day. From a report: Without the necessary number of human moderators, the disturbing content is slipping by WhatsApp's automated systems. A report reviewed by TechCrunch from two Israeli NGOs details how third-party apps for discovering WhatsApp groups include "Adult" sections that offer invite links to join rings of users trading images of child exploitation. TechCrunch has reviewed materials showing many of these groups are currently active.

TechCrunch's investigation shows that Facebook could do more to police WhatsApp and remove this kind of content. Even without technical solutions that would require a weakening of encryption, WhatsApp's moderators should have been able to find these groups and put a stop to them. Groups with names like "child porn only no adv" and "child porn xvideos" found on the group discovery app "Group Links For Whats" by Lisa Studio don't even attempt to hide their nature.

Better manual investigation of these group discovery apps and WhatsApp itself should have immediately led these groups to be deleted and their members banned. While Facebook doubled its moderation staff from 10,000 to 20,000 in 2018 to crack down on election interference, bullying, and other policy violations, that staff does not moderate WhatsApp content. With just 300 employees, WhatsApp runs semi-independently, and the company confirms it handles its own moderation efforts. That's proving inadequate for policing a 1.5 billion-user community.
WhatsApp faces a similar problem in developing markets, where its service is being used to spread false information.
Piracy

Swedish ISP Bahnhof Fights Sci-Hub Blocking Order (torrentfreak.com) 53

thomst writes: "After being ordered to block a number of piracy-related domains following a complaint from academic publisher Elsevier, Swedish ISP Bahnhof retaliated by semi-blocking Elsevier's own website and barring the court from visiting Bahnhof.se," reports TorrentFreak. "Those actions have now prompted Sweden's telecoms watchdog to initiate an inquiry to determine whether the ISP breached net neutrality rules."

Bahnhof is under investigation for diverting its users who attempt to click on links to Elsevier -- the complainant in the case -- to a page that explains the giant journal publisher forced the ISP to block access to a number of Sci-Hub domains, via a court order it doesn't have the resources to fight. That page includes a link to Elsevier that Bahnhof doesn't intercept. So, is it reasonable for Bahnhof to divert its users to a "fuck you" page, rather than allowing them to freely access Elsevier?

Google

Google's Secret China Project 'Effectively Ended' After Internal Confrontation: Report (theintercept.com) 82

Less than five months after Google's plan to build a censored search engine and other tools for the Chinese market became public, the company has "effectively ended" the project, reports The Intercept. From the report: Google has been forced to shut down a data analysis system it was using to develop a censored search engine for China after members of the company's privacy team raised internal complaints that it had been kept secret from them, The Intercept has learned. The internal rift over the system has had massive ramifications, effectively ending work on the censored search engine, known as Dragonfly, according to two sources familiar with the plans. The incident represents a major blow to top Google executives, including CEO Sundar Pichai, who have over the last two years made the China project one of their main priorities.

The dispute began in mid-August, when The Intercept revealed that Google employees working on Dragonfly had been using a Beijing-based website to help develop blacklists for the censored search engine, which was designed to block out broad categories of information related to democracy, human rights, and peaceful protest, in accordance with strict rules on censorship in China that are enforced by the country's authoritarian Communist Party government.
