Following up on Stefanie’s recap of unconf 17, we will spend this entire week posting summaries of projects developed at the event. We plan to highlight 4-5 projects each day, with detailed posts from a handful of teams to follow. checkers Summary: checkers is a framework for reviewing analysis projects. It provides automated checks for best practices, built as extensions of the goodpractice package. In addition, checkers includes a descriptive guide to best practices....
skimr Summary: skimr, a package inspired by Hadley Wickham’s precis package, aims to provide summary statistics iteratively and interactively as part of a pipeline. The package provides easily skimmable summary statistics to help you better understand your data and see what is missing....
We held our 4th annual unconference in Los Angeles, May 25-26, 2017. Scientists, R-software users and developers, and open data enthusiasts from academia, industry, government, and non-profits came together for two days to hack on projects they dreamed up and to give our online community an opportunity to connect in person. The result? 70 people from 7 countries on 3 continents proposed 69 ideas leading to 21 projects in 2 days, and one awesome community just upped its game!...
Like all other types of visualization, linguistic mapping serves two main goals: data presentation and data analysis. The most common purpose for which linguistic maps are used is simply pointing to the location of one or more languages of interest (presentation). A more sophisticated task is showing the distribution of particular linguistic features, or combinations of them, among the languages of a certain area (presentation and analysis). Three linguistic subdisciplines use maps for visualization: linguistic typology, areal linguistics, and dialectology....
On 21-22 April, the London School of Economics hosted the Text Analysis Package Developers’ Workshop, a two-day event held in London that brought together developers of R packages for working with text and text-related data. These packages cover a wide range of applications, including string handling (stringi) and tokenization (the rOpenSci-onboarded tokenizers, KoNLP), corpus and text processing (readtext, tm, quanteda, and qdap), natural language processing (NLP) tasks such as part-of-speech and dependency tagging (cleanNLP, spacyr), and the statistical analysis of textual data (stm, text2vec, and koRpus), although this list is hardly complete....