Google Brain's AI Achieves State-of-the-Art Text Summarization Performance (venturebeat.com) 20
A Google Brain and Imperial College London team have built a system -- Pre-training with Extracted Gap-sentences for Abstractive SUmmarization Sequence-to-sequence, or Pegasus -- that leverages Google's Transformer architecture combined with pre-training objectives tailored for abstractive text generation. From a report: They say it achieves state-of-the-art results on 12 summarization tasks spanning news, science, stories, instructions, emails, patents, and legislative bills, and that it shows "surprising" performance on low-resource summarization, surpassing previous top results on six data sets with only 1,000 examples. As the researchers point out, text summarization aims to generate accurate and concise summaries from input documents, in contrast to extractive techniques. Rather than merely copying fragments from the input, abstractive summarization might produce novel words and phrases while preserving the principal information, so that the output remains linguistically fluent.
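For a sense of the pre-training objective the name refers to, here is a minimal, illustrative sketch of gap-sentence generation: the sentences that look most "principal" are removed from a document, and the model is trained to generate them from what remains. The paper scores sentences with ROUGE; the simple word-overlap score and all names below are stand-ins, not the authors' code.

```python
# Illustrative sketch of Pegasus-style gap-sentence selection (not the paper's code).
# A word-overlap score stands in for the ROUGE-based scoring used in the paper.

MASK_TOKEN = "<mask_sentence>"  # placeholder token; name is illustrative

def overlap_score(sentence, others):
    """Fraction of the sentence's words that also appear in the rest of the document."""
    words = set(sentence.lower().split())
    rest = set(" ".join(others).lower().split())
    return len(words & rest) / max(len(words), 1)

def make_gap_sentence_example(sentences, num_gaps=1):
    """Mask the highest-scoring sentences; the model is trained to generate them."""
    scored = []
    for i, s in enumerate(sentences):
        others = sentences[:i] + sentences[i + 1:]
        scored.append((overlap_score(s, others), i))
    selected = {i for _, i in sorted(scored, reverse=True)[:num_gaps]}
    inputs = [MASK_TOKEN if i in selected else s for i, s in enumerate(sentences)]
    targets = [sentences[i] for i in sorted(selected)]
    return " ".join(inputs), " ".join(targets)

doc = [
    "Pegasus pre-trains a Transformer on large text corpora.",
    "Important sentences are removed from each document.",
    "The model must generate the missing sentences from the rest.",
]
source, target = make_gap_sentence_example(doc, num_gaps=1)
print(source)  # document with one sentence replaced by the mask token
print(target)  # the masked sentence, used as the generation target
```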
Transformers are a type of neural architecture introduced in a paper by researchers at Google Brain, Google's AI research division. As do all deep neural networks, they contain functions (neurons) arranged in interconnected layers that transmit signals from input data and slowly adjust the synaptic strength (weights) of each connection -- that's how all AI models extract features and learn to make predictions. But Transformers uniquely have attention. Every output element is connected to every input element, and the weightings between them are calculated dynamically.
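The "dynamically calculated weightings" above are what attention computes. Below is a minimal NumPy sketch of scaled dot-product attention, the core operation of the Transformer: each output position derives a weight for every input position from the data itself, then takes a weighted mix of the inputs. Shapes and names are illustrative only.

```python
import numpy as np

def scaled_dot_product_attention(queries, keys, values):
    """Every output element attends to every input element; the weights are computed dynamically."""
    d_k = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d_k)                  # one score per (output, input) pair
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax over input positions
    return weights @ values, weights

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))   # 4 output positions, dimension 8
k = rng.normal(size=(6, 8))   # 6 input positions
v = rng.normal(size=(6, 8))
out, w = scaled_dot_product_attention(q, k, v)
print(out.shape, w.shape)     # (4, 8) (4, 6): each output is a weighted mix of all inputs
```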
Re: (Score:1)
But it frees him up to golf more. #MakeAmericaAutomated!
Copyright question: (Score:4, Interesting)
Let's say I uploaded one of my reports to this machine and it generates the summary. Then I take that summary and paste it into my Executive Summary section.
Who owns the copyright of that summary?
Re: (Score:2)
Google owns it.
Just like everything else you do online.
Re: (Score:2)
Seems reasonable. *Checks "I agree to Terms of Service"*
Re: (Score:2)
Re: (Score:2)
Who owns the copyright of that summary?
Whoever has the most lawyers.
12 tasks (Score:1)
Re: (Score:2)
This is the kind of technology Google Cloud should be offering to its customers. Not more of what everyone else already has.
Buzzwordificationism (Score:1)
Their buzzword shoehorning into recognizable acronyms rivals that of NASA probe naming. Maybe it's done with AI?
Re: Buzzwordificationism (Score:2)
Summarizes everything as ... (Score:3)
"Yadda, yadda, yadda."
Transformers (Score:2)
Despite what Google says, Transformers are actually Robots in Disguise.
Summary of Everything? (Score:2)
"42".
Comment removed (Score:5, Funny)
"state of the art" ... Hollowest Phrase Of the Dec (Score:2)
It means that on a scale of 0 to 100%, you are at 100, while leaving out what that 100% actually is. :)
Why would one do that? To hide that the 100% is actually really bad in absolute numbers, of course!
A disturbance in the Force... (Score:2)
Rather than merely copying fragments from the input, abstractive summarization might produce novel words and phrases while preserving the principal information, so that the output remains linguistically fluent.
It's as if millions of technical writers cried out in terror and were suddenly silenced.
Summarize (Score:1)
How does it do on Proust?