
New system cleans messy data tables automatically | MIT News


MIT researchers have created a new system that automatically cleans “dirty data”: the typos, duplicates, missing values, misspellings, and anomalies dreaded by data analysts, data engineers, and data scientists. The system, called PClean, is the latest in a series of domain-specific probabilistic programming languages written by researchers at the Probabilistic Computing Project to simplify and automate the development of AI applications (others include one for 3D perception via inverse graphics and another for modeling time series and databases).

According to surveys conducted by Anaconda and Figure Eight, data cleaning can take up to a quarter of a data scientist’s time. Automating the task is challenging because different datasets require different types of cleaning, and common-sense judgment calls about objects in the world are often needed (for example, which of the many cities called “Beverly Hills” someone lives in). PClean provides generic common-sense models for such judgment calls that can be customized to specific databases and types of errors.

PClean uses a knowledge-based approach to automate the data cleaning process: users encode background knowledge about their database and what sorts of issues might appear. Take, for instance, the problem of cleaning state names in a database of apartment listings. What if someone said they lived in Beverly Hills but left the state column empty? Although California has a famous Beverly Hills, so do Florida, Missouri, and Texas … and there is a neighborhood of Baltimore known as Beverly Hills. How can you know in which one the person lives? This is where PClean’s expressive scripting language comes in. Users can give PClean background knowledge about the domain and about how the data might be corrupted. PClean combines this knowledge via common-sense probabilistic reasoning to come up with the answer. For example, given additional knowledge about typical rents, PClean correctly infers the California Beverly Hills, because of the high cost of rent where the respondent lives.
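To make the flavor of this inference concrete, here is a minimal Python sketch of the Beverly Hills example. It is not PClean’s actual language or model, and the prior and rent figures are hypothetical; it simply applies Bayes’ rule to the same judgment call:

```python
# Toy Bayesian inference over which "Beverly Hills" a listing refers to.
# Illustrative sketch only; PClean's real models and syntax are richer.
import math

# Prior belief about where a random "Beverly Hills" listing is located
# (hypothetical numbers for illustration).
prior = {"CA": 0.70, "FL": 0.10, "MO": 0.05, "TX": 0.10, "MD": 0.05}

# Background knowledge: typical monthly rent (mean, standard deviation)
# in each candidate location (also hypothetical).
rent_model = {"CA": (4500, 1200), "FL": (1300, 400), "MO": (900, 300),
              "TX": (1400, 450), "MD": (1200, 400)}

def normal_pdf(x, mean, std):
    """Density of a normal distribution, used as a likelihood for rent."""
    z = (x - mean) / std
    return math.exp(-0.5 * z * z) / (std * math.sqrt(2 * math.pi))

def posterior_over_states(observed_rent):
    """Bayes' rule: posterior is proportional to prior times likelihood."""
    unnorm = {s: prior[s] * normal_pdf(observed_rent, *rent_model[s])
              for s in prior}
    total = sum(unnorm.values())
    return {s: round(p / total, 3) for s, p in unnorm.items()}

# A $5,000/month rent makes the California Beverly Hills by far the most
# probable explanation for the missing state.
print(posterior_over_states(5000))
```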

Alex Lew, the paper’s lead author and a PhD student in the Department of Electrical Engineering and Computer Science (EECS), says he is most excited that PClean lets people enlist a computer’s help the same way they would seek help from one another. “When I ask a friend for help with something, it’s often easier than asking a computer. That’s because in today’s mainstream programming languages, I have to give step-by-step instructions, which can’t assume that the computer has any context about the world or the task, or even just basic common-sense reasoning abilities. With a human, I get to assume all those things,” he says. “PClean is a step toward closing that gap. It lets me tell the computer what I know about a problem, encoding the same kind of background knowledge I’d explain to a person helping me clean my data. I can also give PClean hints, tips, and tricks I’ve already discovered for solving the task faster.”

The co-authors are Monica Agrawal, a PhD student in EECS; David Sontag, an associate professor in EECS; and Vikash Mansinghka, a principal research scientist in the Department of Brain and Cognitive Sciences.

What innovations allow it to work?

The idea that probabilistic cleaning based on declarative, generative knowledge could potentially deliver greater accuracy than machine learning goes back to a 2003 paper by Hanna Pasula and others from Stuart Russell’s laboratory at the University of California at Berkeley. “Ensuring data quality is a huge problem in the real world, and almost all existing solutions are ad hoc, expensive, and error-prone,” says Russell, a professor of computer science at UC Berkeley. “PClean is the first scalable, well-engineered, general-purpose solution based on generative data modeling, which has to be the right way to go. The results speak for themselves.” Co-author Agrawal adds that “existing data cleaning methods are more constrained in their expressiveness, which can be more user-friendly, but at the expense of being significantly more limited. Further, we found that PClean can scale to very large datasets that have unrealistic runtimes under existing systems.”

PClean builds on recent progress in probabilistic programming, including a new AI programming platform built at MIT’s Probabilistic Computing Project, which makes it much easier to apply realistic models of human knowledge to interpret data. PClean’s repairs are based on Bayesian reasoning, an approach that weighs alternative explanations of ambiguous data by applying probabilities based on prior knowledge to the data at hand. “The ability to make these kinds of uncertain decisions, where we want to tell the computer what kinds of things it is likely to see, and have the computer automatically use that to figure out what the right answer probably is, is central to probabilistic programming,” says Lew.
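As a toy illustration of weighing alternative explanations of fuzzy data (again hypothetical code and numbers, not PClean’s modeling language), the sketch below scores candidate clean values for a misspelled city name by multiplying a prior over clean values with a crude typo likelihood:

```python
# Weighing alternative explanations of a fuzzy observation with Bayes' rule.
# Illustrative only; PClean's real corruption models are richer than this.
import difflib

# Prior knowledge: how often each clean city name appears in the data
# (hypothetical frequencies).
city_prior = {"Beverly Hills": 0.40, "Berkeley": 0.35, "Beverley": 0.25}

def typo_likelihood(observed, clean, per_char_error=0.02):
    """Probability that `clean` was corrupted into `observed`, using string
    similarity as a crude stand-in for a real typo model."""
    sim = difflib.SequenceMatcher(None, observed, clean).ratio()
    n_errors = round((1 - sim) * max(len(observed), len(clean)))
    n_intact = max(len(clean) - n_errors, 0)
    return (per_char_error ** n_errors) * ((1 - per_char_error) ** n_intact)

def explain(observed):
    """Posterior over which clean city the observed string really is."""
    scores = {c: city_prior[c] * typo_likelihood(observed, c)
              for c in city_prior}
    total = sum(scores.values())
    return {c: round(s / total, 4) for c, s in scores.items()}

# "Beverley Hills" is best explained as a one-character slip of the common
# value "Beverly Hills", not as the rarer value "Beverley".
print(explain("Beverley Hills"))
```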

PClean is the first Bayesian data-cleaning system that can combine domain expertise with common-sense reasoning to automatically clean databases of millions of records. PClean achieves this scale via three innovations. First, PClean’s scripting language lets users describe what they know, yielding accurate models even for complex databases. Second, PClean’s inference algorithm uses a two-phase approach, processing records one at a time to make informed guesses about how to clean them, then revisiting its judgment calls to fix mistakes; this gives robust, accurate inference results. Third, PClean provides a custom compiler that generates fast inference code, which lets PClean run on million-record databases faster than multiple competing approaches. “PClean users can give hints to help PClean reason more effectively about their databases, and can tune its performance, unlike previous probabilistic programming approaches, which relied primarily on generic inference algorithms,” says Mansinghka.
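The two-phase idea can be sketched schematically: make a quick per-record guess, then revisit each judgment call in light of the rest of the tentatively cleaned table. This is a hypothetical illustration assuming user-supplied `candidates` and `score` functions; it is not PClean’s actual inference algorithm:

```python
# Schematic two-phase cleaning: greedy per-record guesses, then revision
# passes that also reward consistency with the rest of the database.
# Hypothetical sketch; PClean's real inference is more sophisticated.
from collections import Counter

def clean(records, candidates, score, sweeps=3):
    # Phase 1: process records one at a time, picking the locally best fix.
    cleaned = [max(candidates(r), key=lambda c: score(r, c)) for r in records]
    # Phase 2: revisit each judgment call, adding a bonus for values that
    # agree with what the other records currently look like.
    for _ in range(sweeps):
        counts = Counter(cleaned)
        for i, r in enumerate(records):
            cleaned[i] = max(candidates(r),
                             key=lambda c: score(r, c) + 0.1 * counts[c])
    return cleaned

# Example: normalizing noisy state abbreviations.
records = ["CA", "C.A.", "Calif", "CA", "FL"]
candidates = lambda r: ["CA", "FL"]
score = lambda r, c: 1.0 if r.replace(".", "").upper().startswith(c) else 0.0
print(clean(records, candidates, score))  # -> ['CA', 'CA', 'CA', 'CA', 'FL']
```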

Like all probabilistic programs, PClean programs are dramatically shorter than alternative state-of-the-art options: a PClean program needs only about 50 lines of code to outperform benchmarks in terms of accuracy and runtime. For comparison, a simple snake cellphone game takes twice as many lines of code to run, and Minecraft comes in at over 1 million lines of code.

In their paper, presented at the 2021 Society for Artificial Intelligence and Statistics (AISTATS) conference, the authors demonstrate PClean’s ability to scale to datasets containing millions of records by using it to clean the Medicare Physician Compare National dataset, which includes 2.2 million records. Running for just seven and a half hours, PClean found more than 8,000 errors. The authors then verified by hand (via searches on hospital websites and doctors’ LinkedIn pages) that for more than 96 percent of them, PClean’s proposed fix was correct.

Since PClean is based on Bayesian probability, it can also give calibrated estimates of its uncertainty. “It can maintain multiple hypotheses, giving you graded judgments, not just yes/no answers. This builds trust and helps users override PClean when necessary. For example, you can look at a judgment where PClean was uncertain and tell it the right answer. It can then update the rest of its judgments in light of your feedback,” says Mansinghka. “We think there is a lot of potential value in that kind of interactive process that interleaves human judgment with machine judgment. We see PClean as an early example of a new kind of AI system that can be told more of what people know, report when it is uncertain, and reason and interact with people in more useful, human-like ways.”
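The interaction pattern Mansinghka describes, graded judgments plus user overrides that condition later reasoning, might look roughly like this; the interface below is hypothetical, invented for illustration, and is not PClean’s actual API:

```python
# Sketch of graded judgments and interactive overrides. Hypothetical
# interface for illustration; not PClean's actual API.

def graded_judgment(posterior, threshold=0.9):
    """Return a confident answer, or flag the cell for human review along
    with the full set of weighted hypotheses."""
    best = max(posterior, key=posterior.get)
    if posterior[best] >= threshold:
        return ("accept", best, posterior[best])
    return ("review", None, posterior)

def apply_override(posteriors, cell, value):
    """Pin a user-supplied answer for one cell, so downstream judgments
    can be recomputed conditioned on it."""
    updated = dict(posteriors)
    updated[cell] = {value: 1.0}
    return updated

posteriors = {
    ("row 17", "state"): {"CA": 0.55, "FL": 0.30, "TX": 0.15},  # uncertain
    ("row 18", "state"): {"CA": 0.97, "MO": 0.03},              # confident
}
for cell, dist in posteriors.items():
    print(cell, graded_judgment(dist))

# A human resolves the uncertain cell; the system then conditions on it.
posteriors = apply_override(posteriors, ("row 17", "state"), "FL")
```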

David Pfau, a senior research scientist at DeepMind, noted in a tweet that PClean meets a business need: “When you consider that the vast majority of business data out there is not images of dogs, but entries in relational databases and spreadsheets, it’s a wonder that things like this haven’t yet seen the success that deep learning has.”

Benefits, risks, and regulation

PClean makes it cheaper and easier to join messy, inconsistent databases without the large-scale investments in human and software systems that data-centric companies currently rely on. This has potential societal benefits, but also risks: among them, that PClean may make it cheaper and easier to invade people’s privacy, and potentially even to de-anonymize them, by joining incomplete information from multiple public sources.

“Ultimately, we need much stronger data, AI, and privacy regulation to mitigate these kinds of harms,” says Mansinghka. Lew adds, “Compared to machine-learning approaches to data cleaning, PClean might allow for finer-grained regulatory control. For example, PClean can tell us not only that it merged two records as referring to the same person, but also why it did so, and I can come to my own judgment about whether I agree. I can even tell PClean to consider only certain reasons for merging two entries.” Unfortunately, the researchers say, privacy concerns remain no matter how clean the dataset is.

Mansinghka and Lew are excited to help people pursue socially beneficial applications. They have been approached by people who want to use PClean to improve data quality for journalism and humanitarian applications, such as anti-corruption monitoring and consolidating donor records submitted to state election boards. Agrawal says she hopes PClean will free up data scientists’ time “to focus on the problems they care about, instead of data cleaning. Early feedback and enthusiasm around PClean suggest that this might be the case, which we’re excited to hear.”


Source: MIT News
