Automated Stories: Using Algorithms to Craft News Content
That there is a piece of StatSmack, an automatically generated snippet of text produced by StatSheet, designed for Brunonians like myself to use as trash talk on social media. No Pulitzer there, sure, but it’s a creative application of text-generation algorithms: creating a new experience and a new opportunity to engage, driven directly from the data. Automated Insights, the parent company of StatSheet, uses algorithms, as does its competitor Narrative Science, to automatically analyze structured data and produce readable texts, reports, and dashboards. New analytics services such as Echobox are coming online as well, producing readable, actionable pieces of editorial advice in plain English from nothing more than the stream of clicks and shares on your site. Other automation efforts have used algorithms to provide context for a story, an activity journalists often engage in when making sense of an ongoing event.
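The core idea behind these data-to-text systems can be sketched very simply: pick a template based on what the data shows, then fill in the blanks. The sketch below is a toy illustration only; the team names, the score data, and the margin thresholds are all invented and bear no relation to StatSheet's or Automated Insights' actual pipelines.

```python
# A minimal sketch of template-based data-to-text generation, the basic
# idea behind automated game recaps. All data and templates are invented.

def generate_recap(game):
    """Pick a template based on the margin of victory and fill it from data."""
    margin = game["winner_score"] - game["loser_score"]
    if margin >= 20:
        template = "{winner} crushed {loser} {ws}-{ls} in a blowout."
    elif margin <= 3:
        template = "{winner} edged {loser} {ws}-{ls} in a nail-biter."
    else:
        template = "{winner} beat {loser} {ws}-{ls}."
    return template.format(winner=game["winner"], loser=game["loser"],
                           ws=game["winner_score"], ls=game["loser_score"])

game = {"winner": "Brown", "loser": "Yale",
        "winner_score": 78, "loser_score": 55}
print(generate_recap(game))  # -> "Brown crushed Yale 78-55 in a blowout."
```

Production systems layer far more linguistic variation and statistical analysis on top, but the template-selection-plus-slot-filling structure is the same basic mechanism.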
My own recent research has looked at automatically annotating charts and maps to help explain the context of outliers or salient trends. All of these techniques can enrich a data story and provide additional entry points and avenues for engagement with the content. Just because it’s automated doesn’t mean it’s robotic-sounding, either. A paper published just last week by Christer Clerwall showed in evaluations that readers couldn’t tell the difference between a football game recap written by Automated Insights and one written by a human journalist. If all of this piques your curiosity about algorithms and automated storytelling in the news, you’re in luck.
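One common first step in annotating a chart automatically is detecting which points are worth calling out at all. The sketch below flags values more than two standard deviations from the mean and generates a caption for each; the data, the threshold, and the caption wording are illustrative assumptions, not the method from the research described above.

```python
# A hedged sketch of automatic chart annotation: flag points far from the
# mean and generate a short caption for each. Data and threshold invented.
from statistics import mean, stdev

def annotate_outliers(series, labels, z_threshold=2.0):
    m, s = mean(series), stdev(series)
    notes = []
    for label, value in zip(labels, series):
        z = (value - m) / s
        if abs(z) >= z_threshold:
            direction = "above" if z > 0 else "below"
            notes.append(f"{label}: {value} is unusually {direction} average ({m:.1f}).")
    return notes

values = [10, 11, 9, 10, 30, 11]
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
print(annotate_outliers(values, months))
```

A real annotation system would also rank candidate callouts and phrase them with more context, but outlier detection of this kind is a plausible entry point.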
The Data Journalism Project is made possible by generous funding from The Tow Foundation and the John S. and James L. Knight Foundation. It includes a wide range of academic research, teaching, public engagement, and development of best practices in the field of data and computational journalism.
Automated Testing Tool
TestComplete Platform, which powers the TestComplete Desktop, Web, and Mobile automated testing tools, helps you quickly create repeatable, accurate automated tests across multiple devices, platforms, and environments. Novice testers can use the record-and-playback feature, while scripted testing support is available for more experienced users. TestComplete Desktop has everything you need in an automated testing tool. Beyond the testing features provided by TestComplete Platform, TestComplete Desktop comes with many additional functional testing capabilities and automated testing tools. Create robust tests without writing a single line of script code using TestComplete Platform’s point-and-click automated test recorder.
Extend the TestComplete Platform to create automated desktop tests that meet your specific testing needs. TestComplete Platform offers numerous built-in keyword-driven testing operations that let you easily perform various automated software testing actions on desktop, web, or mobile applications. TestComplete Platform’s built-in Automated Test Visualizer captures and displays screenshots of each operation performed on the tested application, providing a visual overview of the entire test flow. Insert checkpoints into your scripts or keyword tests during recording with a simple drag and drop. Using TestComplete Platform’s powerful debugger, you can create breakpoints and even pause automated test execution manually.
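The keyword-driven testing idea described above can be sketched in a few lines: a test is a table of (keyword, arguments) rows, and a small interpreter maps each keyword to an action or a checkpoint. The keywords, the fake application object, and the step names below are all invented for illustration; they are not TestComplete's actual operations or API.

```python
# An illustrative sketch of keyword-driven testing: test steps are data,
# and a dispatcher executes them. Everything here is invented, not the
# real TestComplete operation set.
class FakeApp:
    """A stand-in for an application under test."""
    def __init__(self):
        self.fields, self.log = {}, []
    def type_into(self, field, text):
        self.fields[field] = text
        self.log.append(f"typed {text!r} into {field}")
    def click(self, button):
        self.log.append(f"clicked {button}")

def run_keyword_test(app, steps):
    keywords = {
        "EnterText": lambda field, text: app.type_into(field, text),
        "ClickButton": lambda name: app.click(name),
        "CheckValue": lambda field, expected: app.fields.get(field) == expected,
    }
    for keyword, *args in steps:
        result = keywords[keyword](*args)
        if keyword == "CheckValue" and not result:
            raise AssertionError(f"checkpoint failed: {args}")

app = FakeApp()
run_keyword_test(app, [
    ("EnterText", "username", "alice"),
    ("ClickButton", "Login"),
    ("CheckValue", "username", "alice"),  # a checkpoint row
])
print(app.log)
```

Because the steps are plain data rather than code, they can be recorded, edited in a table, and replayed, which is what makes the approach friendly to testers who do not script.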
TestComplete Platform generates detailed logs of all actions performed during automated testing. TestComplete allows you to add, run, and report on functional API tests created using SoapUI NG Pro and ServiceV Pro, the functional API testing and service virtualization tools that are part of the Ready! API platform. With TestComplete, you can also add, run, and report on API tests created using SoapUI, a free and open source API testing tool.
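At its core, a functional API test of the kind these tools manage is a request plus assertions on the response. To keep the sketch below self-contained it spins up a tiny local HTTP stub instead of calling a real service; the endpoint path and the JSON payload are invented for illustration and have nothing to do with SoapUI's actual test format.

```python
# A minimal, self-contained sketch of a functional API test: start a
# local stub server, send a request, assert on status and body.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class StubAPI(BaseHTTPRequestHandler):
    """A throwaway stub standing in for the service under test."""
    def do_GET(self):
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), StubAPI)  # port 0 = auto-assign
threading.Thread(target=server.serve_forever, daemon=True).start()

with urlopen(f"http://127.0.0.1:{server.server_port}/health") as resp:
    status = resp.status
    payload = json.loads(resp.read())
server.shutdown()

assert status == 200
assert payload == {"status": "ok"}
print("API test passed")
```

Tools like SoapUI wrap the same request-and-assert pattern in a GUI, data-driven inputs, and reporting, which is what TestComplete imports and runs.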
ETS Research: Automated Scoring and Natural Language Processing
At ETS, our researchers have extensive experience in Natural Language Processing (NLP) – a field that applies principles of linguistics and computer science to create computer applications that interact with human language. NLP technology is the basis for the automated scoring applications we are developing to address the increasing demand for open-ended, or constructed-response, items. ETS has been at the forefront of research in automated scoring of open-ended items for over two decades, with a long list of significant, peer-reviewed research publications as evidence of our activity in the field. ETS scientists have published on automated scoring issues in the major journals of the educational measurement, computational linguistics, and language testing fields. The topic of automated constructed-response scoring has begun to receive substantial attention in the context of discussions related to assessment reform in the United States, as ETS measurement professionals and their colleagues in other assessment organizations noted in the recent report, Automated Scoring for the Assessment of Common Core Standards.
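To make the NLP connection concrete, an automated scoring system typically turns a response into numeric linguistic features before any scoring model is applied. The toy example below extracts just two such features; real systems like those described use far richer measures (grammar, discourse, content), and both the features and the sample sentence here are assumptions for demonstration only.

```python
# A toy illustration of NLP-style feature extraction for essay scoring.
# Real scoring engines use much richer linguistic features; these two
# (length and vocabulary diversity) are purely illustrative.
def essay_features(text):
    tokens = text.lower().split()
    return {
        "num_tokens": len(tokens),
        # type-token ratio: unique words / total words, a crude
        # measure of vocabulary diversity
        "type_token_ratio": len(set(tokens)) / len(tokens),
    }

feats = essay_features("The cat sat on the mat because the cat was tired")
print(feats)
```

A scoring model (for example, a regression trained on human-scored responses) would then map feature vectors like this one to a predicted score.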
ETS holds that automated scoring should meet several conditions:

- Automated scores are consistent with the scores from expert human graders.
- The way automated scores are produced is understandable and substantively meaningful.
- Automated scores have been validated against external measures, in the same way as is done with human scoring.
- The impact of automated scoring on reported scores is understood.

ETS is committed to developing automated scoring systems that meet these conditions, and to evaluating them accordingly.
Responsible application of automated scoring requires evaluating all of these conditions; using agreement with human raters as the sole basis for assessing a scoring system’s performance can misrepresent the effects of introducing it into large-scale operational use. View a comprehensive list of publications related to automated scoring and natural language processing.
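The human-machine agreement mentioned above is commonly summarized with quadratic weighted kappa, which penalizes large score disagreements more than small ones and corrects for chance agreement. The sketch below computes it from scratch on invented human and machine scores; as the text notes, agreement alone is not sufficient validation.

```python
# A from-scratch sketch of quadratic weighted kappa, a standard
# human-machine agreement statistic in automated scoring. Scores invented.
def quadratic_weighted_kappa(a, b, min_s, max_s):
    n_cats = max_s - min_s + 1
    # observed matrix of (human score, machine score) pair counts
    O = [[0.0] * n_cats for _ in range(n_cats)]
    for x, y in zip(a, b):
        O[x - min_s][y - min_s] += 1
    hist_a = [sum(row) for row in O]
    hist_b = [sum(O[i][j] for i in range(n_cats)) for j in range(n_cats)]
    n = len(a)
    num = den = 0.0
    for i in range(n_cats):
        for j in range(n_cats):
            w = (i - j) ** 2 / (n_cats - 1) ** 2   # quadratic disagreement weight
            expected = hist_a[i] * hist_b[j] / n   # chance-agreement count
            num += w * O[i][j]
            den += w * expected
    return 1 - num / den

human =   [1, 2, 3, 4, 4, 2, 3, 1]
machine = [1, 2, 3, 4, 3, 2, 3, 2]
print(round(quadratic_weighted_kappa(human, machine, 1, 4), 3))  # -> 0.875
```

A kappa of 1 means perfect agreement and 0 means chance-level agreement, but even a high kappa says nothing about the other conditions, such as validity against external measures.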