Content Analysis Tool
A content inventory is a key starting point for website content planning, auditing, project scoping and estimation, content management, and ongoing website content tracking. Creating content inventories can be tedious, time-consuming, and expensive. Until now, content strategists, user experience architects, and content managers have had to struggle with inadequate, clunky tools that require many hours of post-processing work to create usable reports. The Content Analysis Tool is purpose-built to create usable, detailed, automated content inventories. Designed with an easy-to-use dashboard interface, CAT allows users and administrators to manage multiple content inventory projects, quickly and easily generating a rich set of data to enable deeper analysis.
CAT crawls your website and returns a rich set of data about every page, presented in an easy-to-use dashboard that lets you manage your projects and explore your results quickly and easily. CAT also integrates with Google Analytics, allowing you to include valuable performance data right in your job details for an integrated analysis experience. With search built into the CAT dashboard, you can search for any string (URLs, URL fragments, file names, keywords, and more) and find matches in job details, resource details, or both.

The Technology Behind CAT
CAT’s technology is rooted in the idea that getting clean, complete data and analyzing it effectively is crucial to content analysis. You can even track changes to your site over time: CAT’s diffing feature lets you compare reports, so you can easily see what has changed and get a quick snapshot of your site’s status. Learn more about how an automated content inventory can help you move from data analysis to content analysis faster and more easily.
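The search and diffing features described above can be sketched in a few lines. This is a hypothetical illustration only; the record fields (`url`, `title`) and the hash-per-URL snapshot format are assumptions for the sketch, not CAT's actual schema.

```python
# Hypothetical sketch of two dashboard features: substring search over a
# crawled inventory, and diffing two crawl snapshots. Field names are
# assumptions, not CAT's real data model.

def search_inventory(records, query):
    """Return records whose URL or title contains the query string."""
    q = query.lower()
    return [r for r in records
            if q in r.get("url", "").lower() or q in r.get("title", "").lower()]

def diff_reports(old, new):
    """Compare two {url: content_hash} snapshots from successive crawls."""
    old_urls, new_urls = set(old), set(new)
    return {
        "added": sorted(new_urls - old_urls),
        "removed": sorted(old_urls - new_urls),
        "changed": sorted(u for u in old_urls & new_urls if old[u] != new[u]),
    }

records = [
    {"url": "https://example.com/about", "title": "About Us"},
    {"url": "https://example.com/report.pdf", "title": "Annual Report"},
]
print([r["url"] for r in search_inventory(records, ".pdf")])
# → ['https://example.com/report.pdf']

jan = {"/": "h1", "/pricing": "h2", "/old-page": "h3"}
feb = {"/": "h1", "/pricing": "h2x", "/new-page": "h4"}
print(diff_reports(jan, feb))
# → {'added': ['/new-page'], 'removed': ['/old-page'], 'changed': ['/pricing']}
```

Keying the diff on URL and comparing a content hash per page is one simple way to surface added, removed, and changed pages between two inventory runs.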
ETS Research: Automated Scoring of Written Content
When other factors are controlled for, open-ended written answers are often preferable to multiple-choice responses for assessing content knowledge. At ETS, we have been conducting significant research on accurately scoring the content of written responses for more than a decade. One approach requires significant human effort up front: experts describe the key concepts that the automated scoring system should find in a correct response to each item. This work represents the state of the art in computational linguistics and related fields, and it draws on extensive research that ETS has conducted on automated content scoring. In addition to assessing whether test takers understand a concept, content scoring may be used to evaluate whether a writer has successfully used source material, for example in test questions that require students to read one or more passages and include relevant information from those sources in an effective response.
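The concept-based approach described above can be illustrated with a toy matcher that checks whether each expert-defined key concept appears in a response. This is a deliberate simplification: real content-scoring systems use far richer linguistic matching (paraphrase, entailment, syntax) than the keyword-variant lookup sketched here.

```python
# Toy illustration of concept-based scoring: each key concept is a set of
# acceptable surface variants supplied by human experts. The score is the
# fraction of key concepts found in the response. Real systems are far
# more sophisticated; this only conveys the basic idea.

def concept_score(response, key_concepts):
    """Return the fraction of key concepts matched in the response."""
    text = response.lower()
    matched = sum(
        1 for variants in key_concepts
        if any(v in text for v in variants)
    )
    return matched / len(key_concepts)

key_concepts = [
    {"evaporation", "evaporates"},   # concept 1: water evaporates
    {"condensation", "condenses"},   # concept 2: vapor condenses
]
print(concept_score("Water evaporates and later condenses into clouds.",
                    key_concepts))
# → 1.0 (both key concepts found)
```

The human effort the passage mentions lives in building the `key_concepts` lists for every item, which is exactly why this approach is labor-intensive to set up.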
Our current research also focuses on extending the range of applications for automated content assessment. We are investigating how best to use automated systems to provide feedback, for example on content knowledge or use of sources, in the classroom or in online courses. Below are some recent or significant publications that our researchers have authored on the subject of automated scoring of written content. One study tested a concept-based tool for automated content scoring. Another paper describes a system for automated scoring of short answers, the task posed by the 2013 Joint Student Response Analysis and 8th Recognizing Textual Entailment Challenge.
The article also discusses implications for the development of automated essay scoring systems. View more research publications related to automated scoring of written content.
Bureau of Alcohol, Tobacco, Firearms and Explosives
Firearms examiners can examine bullets and cartridge casings to determine whether they were expelled from the same firearm. Comparing suspect bullets and cartridge casings recovered at crime scenes, or test-fired from a recovered firearm, against the vast inventory of recovered or test-fired projectiles and casings has been a tedious, time-consuming process, and the severe stress and eye strain it places on the examiner slow the work further. Statistics collected by ATF’s National Tracing Center indicate that revolvers and semiautomatic firearms are traced with nearly equal frequency by law enforcement agencies. The ballistic comparison system does not positively identify bullets or casings fired from the same weapon; that determination must be made by a firearms examiner.
The best evidence linking a firearm to a specific crime is matching the recovered projectiles and cartridge casings to the suspect firearm. Either a firearms examiner or a trained technician can enter data from crime scene bullets and casings, so the examiner’s first contact with the system may come only after all the data have been entered and correlated. At that point, the firearms examiner reviews the scores and examines only those with significant scores at the Signature Analysis Station. As stated earlier, the system does not make identifications; the firearms examiner must determine whether two bullets or cartridge cases came from the same firearm.
Because the system produces a list of scores indicating the relative, quantitative probability of a match, the firearms examiner can retrieve selected images for evaluation on the video screen. If the on-screen image looks as though a match could exist, the examiner inspects the specimens on a comparison microscope to confirm it.
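The correlate-then-review workflow described above can be sketched as a simple ranking step: the system scores stored specimens against the evidence and surfaces only the strongest candidates for the examiner. The specimen IDs, scores, and threshold below are invented for illustration; they do not reflect any real ballistic comparison system's scale or scoring method.

```python
# Hypothetical sketch of the correlation step: rank stored specimens by a
# similarity score and surface only the top candidates for examiner review.
# The IDs, scores, and 0.8 threshold are illustrative assumptions.

def top_candidates(scores, threshold=0.8, limit=10):
    """Return (specimen_id, score) pairs at or above threshold, best first."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [(sid, s) for sid, s in ranked if s >= threshold][:limit]

scores = {"case-1041": 0.93, "case-0217": 0.64, "case-0885": 0.87}
print(top_candidates(scores))
# → [('case-1041', 0.93), ('case-0885', 0.87)]
```

The key design point the passage makes survives in this sketch: the system only ranks and filters; the final identification is always the human examiner's call at the comparison microscope.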