Interesting Data-Related Blogs and Articles – Week of July 28, 2019

Special this week: a Humble Bundle of Data Analysis & Machine Learning ebooks from O’Reilly Media.

Donate at least $1 US and you’ll be able to download 5 ebooks. If you donate at least $15 US, you will get 15 ebooks! A sample of titles:

  • Graphing Data with R
  • Learning Apache Drill
  • Architecting Modern Data Platforms

AWS

Best practices for Amazon RDS PostgreSQL replication

Contains recommendations for monitoring, configuration parameters, and locking behavior on both the source and the replica instances.
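
For example, one quick check I find handy (my own sketch, not from the article; connection details are placeholders) is measuring replay lag from the replica side:

```python
# Hypothetical replication-lag check, run against a read replica (psycopg2).
# Host, database, and credentials are placeholders.
import psycopg2

with psycopg2.connect(host="my-replica.example.com", dbname="postgres",
                      user="monitor", password="change-me") as conn, \
        conn.cursor() as cur:
    # Time elapsed since the last WAL record was replayed on this replica.
    cur.execute("SELECT now() - pg_last_xact_replay_timestamp() AS replay_lag")
    print(cur.fetchone()[0])
```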

Cloud Vendor Deep-Dive: PostgreSQL on AWS Aurora

An interesting overview, from a neutral party, of the Aurora implementation of PostgreSQL. Covers best practices, compatibility with “vanilla” PostgreSQL, monitoring, and a host of other topics.

Orchestrate an ETL process using AWS Step Functions for Amazon Redshift

Describes the architecture of a solution that relies on a combination of Step Functions, Lambda, and Batch for building an ETL workflow. There’s a link in the article to a CloudFormation template that will launch the needed infrastructure.


PostgreSQL

Combined Indexes vs. Separate Indexes In PostgreSQL

Excellent discussion on when to create composite indexes as opposed to indexes on single columns.
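
As a minimal illustration (table and column names are invented, not taken from the article): a composite index covers queries that filter on both columns and on the leading column, while a filter on only the trailing column generally wants its own index.

```python
# Hypothetical composite vs. single-column index sketch (psycopg2).
import psycopg2

ddl = [
    # Serves WHERE customer_id = ... AND order_date = ...,
    # and WHERE customer_id = ... alone (the leading column).
    "CREATE INDEX orders_cust_date_idx ON orders (customer_id, order_date)",
    # A separate index is typically needed for filters on order_date alone.
    "CREATE INDEX orders_date_idx ON orders (order_date)",
]

with psycopg2.connect("dbname=shop") as conn, conn.cursor() as cur:
    for stmt in ddl:
        cur.execute(stmt)
```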

TRUNCATE vs DELETE: Efficiently Clearing Data from a Postgres Table

You might be surprised which method is faster (at least in this set of tests).
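
For reference, the two approaches look roughly like this (table name is invented; see the article for the actual benchmarks):

```python
# Hypothetical sketch of the two ways to clear a table (psycopg2).
import psycopg2

with psycopg2.connect("dbname=shop") as conn, conn.cursor() as cur:
    # DELETE removes rows individually and leaves dead tuples for VACUUM to reclaim.
    cur.execute("DELETE FROM staging_events")
    # TRUNCATE deallocates the table's storage in one operation but takes an
    # ACCESS EXCLUSIVE lock; which wins on speed depends on the scenario tested.
    cur.execute("TRUNCATE TABLE staging_events")
```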

Parallelism in PostgreSQL

PostgreSQL has supported various parallel query strategies since version 9.6, and an even broader set of parallel operations arrived in version 10. This article provides an overview of the types of queries that can execute in parallel.
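
A quick way to see whether the planner chooses a parallel plan for a given query (settings and table name are illustrative):

```python
# Hypothetical peek at a parallel query plan (psycopg2).
import psycopg2

with psycopg2.connect("dbname=analytics") as conn, conn.cursor() as cur:
    # Allow up to 4 workers per Gather node for this session.
    cur.execute("SET max_parallel_workers_per_gather = 4")
    # Look for "Gather" and "Parallel Seq Scan" nodes in the output.
    cur.execute("EXPLAIN SELECT count(*) FROM big_table")
    for (line,) in cur.fetchall():
        print(line)
```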


R

Ten more random useful things in R you may not know about

It’s truly a random list, but a number of these look interesting and helpful.


Software Updates

Anaconda 2019.07

Midyear release of the popular Python environment.

Jupyter, PyCharm and Pizza

Last week I linked to the release notes for the latest version of PyCharm (2019.2), the popular Python IDE. This article shows how to use the Jupyter notebook integration in that release.


Practices and Architecture

Overview of Consistency Levels in Database Systems

“Isolation levels” is probably the better-known concept, but as this article explains, database systems have consistency levels as well (a different sense of “consistency” than the C in ACID). I recommend reading anything by Daniel Abadi.

Presto at Pinterest

Presto is a query engine for executing SQL on different data sources. The article discusses the challenges Pinterest has encountered in using Presto for a variety of use cases and how it overcame them.


General Data-Related

Harvard Data Science Review

A new open-access journal sponsored by the Harvard Data Science Initiative.

Salesforce Completes Acquisition of Tableau

That didn’t take long. The acquisition was originally announced on June 10.

This Week in Neo4j – Exploration from Bloom Canvas, Building your first Graph App, Parallel k-Hop counts, Scala Cypher DSL

Technically, last week in Neo4j. A helpful summary of noteworthy posts from the Neo4j blog.


Upcoming Conferences of Interest

ApacheCon 2019

There’s one in Las Vegas (North America) in September and one in Berlin (Europe) in October.

DCMI 2019 Conference

The Dublin Core Metadata Initiative 2019 Annual Conference. Held this year in Seoul, South Korea.


Classic Paper or Reference of the Week

A Simple Guide to Five Normal Forms in Relational Database Theory

Everyone who designs relational databases ought to be familiar with the concepts in this article. There are additional normal forms (Boyce-Codd normal form, anyone?) beyond the five Kent describes, but these five are the foundation.
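
As a tiny refresher (my own made-up example, not taken from Kent’s paper): when a non-key attribute depends on only part of a composite key, second normal form says to move it into its own table.

```python
# Hypothetical second-normal-form decomposition (psycopg2).
# In the unnormalized version, supplier_address depends only on supplier_id,
# not on the whole (supplier_id, part_id) key, so it repeats on every row.
import psycopg2

ddl = [
    """CREATE TABLE suppliers (
           supplier_id int PRIMARY KEY,
           supplier_address text)""",
    """CREATE TABLE shipments (
           supplier_id int REFERENCES suppliers,
           part_id int,
           qty int,
           PRIMARY KEY (supplier_id, part_id))""",
]

with psycopg2.connect("dbname=demo") as conn, conn.cursor() as cur:
    for stmt in ddl:
        cur.execute(stmt)
```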


Data Technology of the Week

Apache NiFi

“Apache NiFi supports powerful and scalable directed graphs of data routing, transformation, and system mediation logic.” In other words, one can view NiFi as an ETL tool, but it offers features such as data provenance and back pressure that are not common in ETL tools, and these, among other capabilities, make NiFi more of a general-purpose data flow processor. NiFi was originally developed by the US National Security Agency (NSA). You can learn more about flow-based programming in this Wikipedia article. The name is pronounced “nye fye.”


Metadata Standard of the Week

Categories for the Description of Works of Art

Created and maintained by the J. Paul Getty Trust, the CDWA is a set of guidelines for describing art, architecture, and other works of culture. One implementation of the CDWA is the Cultural Objects Name Authority.

Interesting Data-Related Blogs and Articles – Week of July 21, 2019

AWS

AWS Tech Talk (July 31): How to Build Serverless Data Lake Analytics with Amazon Athena

Will discuss using Amazon Athena for querying data in S3.
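
If you haven’t tried Athena yet, the basic pattern for running a query over data in S3 looks roughly like this (database, table, and bucket names are placeholders):

```python
# Hypothetical Athena query over data in S3 (boto3); names are placeholders.
import boto3

athena = boto3.client("athena", region_name="us-east-1")
resp = athena.start_query_execution(
    QueryString="SELECT * FROM web_logs LIMIT 10",
    QueryExecutionContext={"Database": "my_datalake"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/queries/"},
)
# Poll get_query_execution() with this ID until the query completes.
print(resp["QueryExecutionId"])
```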

Migrate and deploy your Apache Hive metastore on Amazon EMR

Includes a discussion of using the Glue Catalog. I’ve experimented with Glue, and the Catalog might be the most useful part of the service at this stage of its development.


PostgreSQL

PostgreSQL wins 2019 O’Reilly Open Source Award for Lifetime Achievement

Last year’s winner was Linux, so PostgreSQL is in excellent company. This is only the second year that the award has been presented.


Python

pandas-profiling

This is a library for profiling a data set: point it at a pandas DataFrame and it generates a detailed exploratory report. I have been playing around with it and so far really like both the functionality and the simplicity of using pandas-profiling from, for example, a Jupyter Notebook.
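
Here’s the kind of minimal usage I mean (the file name is a placeholder):

```python
# Minimal pandas-profiling sketch; in a Jupyter Notebook the report renders inline.
import pandas as pd
from pandas_profiling import ProfileReport

df = pd.read_csv("my_data.csv")          # any DataFrame will do
profile = ProfileReport(df)              # builds the exploratory report
profile.to_file("my_data_profile.html")  # or simply display `profile` in a notebook
```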

Researchers love PyTorch and TensorFlow

From the O’Reilly AI channel. The finding mentioned in the headline comes from an analysis of papers posted on arXiv.org.

StanfordNLP 0.2.0 – Python NLP Library for Many Human Languages

“StanfordNLP is a Python natural language analysis package. It contains tools, which can be used in a pipeline, to convert a string containing human language text into lists of sentences and words, to generate base forms of those words, their parts of speech and morphological features, and to give a syntactic structure dependency parse, which is designed to be parallel among more than 70 languages, using the Universal Dependencies formalism. In addition, it is able to call the CoreNLP Java package and inherits additional functionality from there, such as constituency parsing, coreference resolution, and linguistic pattern matching.”
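
The pipeline is only a few lines to try (this is essentially the quick-start pattern; the sample sentence is mine):

```python
# Minimal StanfordNLP sketch.
import stanfordnlp

stanfordnlp.download("en")        # one-time download of the English models
nlp = stanfordnlp.Pipeline()      # defaults to English
doc = nlp("Barack Obama was born in Hawaii.")
doc.sentences[0].print_dependencies()   # prints the dependency parse
```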


R

June 2019 “Top 40” R Packages

These cover Computational Methods, Data, Finance, Genomics, Machine Learning, Science and Medicine, Statistics, Time Series, Utilities, and Visualization.


Software Updates

DBeaver 6.1.3

Excerpts from the release notes (the link shows all changes):

“New project configuration format was implemented.

Major features:

  • Data viewer: “References” panel was added (browse values by foreign and reference keys)

Other:

  • Connection page was redesigned
  • PostgreSQL: struct/array data types support was fixed
  • MySQL: privileges viewer was fixed (global privileges grant/revoke)”

PyCharm 2019.2

With improved integration with Jupyter Notebook, among other improvements.


Practices and Architecture

Five principles that will keep your data warehouse organized

Some are obvious, some less so (a minimal sketch of a couple of them follows the list):

  • Use schemas to logically group together objects
  • Use consistent and meaningful names for objects in a warehouse
  • Use a separate user for each human being and application connecting to your data warehouse
  • Grant privileges systematically
  • Limit access to superuser privileges
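
Here’s what a couple of those principles might look like in practice (a minimal sketch with invented names, using PostgreSQL-style syntax):

```python
# Hypothetical sketch of schema organization and systematic grants (psycopg2).
import psycopg2

stmts = [
    "CREATE SCHEMA IF NOT EXISTS marketing",                        # group related objects
    "CREATE ROLE analysts",                                         # grant to roles, not people
    "GRANT USAGE ON SCHEMA marketing TO analysts",
    "GRANT SELECT ON ALL TABLES IN SCHEMA marketing TO analysts",
    "CREATE USER jane_doe IN ROLE analysts PASSWORD 'change-me'",   # one user per human
]

with psycopg2.connect("dbname=warehouse") as conn, conn.cursor() as cur:
    for stmt in stmts:
        cur.execute(stmt)
```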

Graph Databases Go Mainstream

Given that this article is published in Forbes, it’s hard to argue with the headline. An interesting overview.

The Little Book of Python Anti-Patterns

“Welcome, fellow Pythoneer! This is a small book of Python anti-patterns and worst practices.”

General Data-Related

What We Learned From The 2018 Liquibase Community Survey

The creator of Liquibase shares information about who uses Liquibase.

You’re very easy to track down, even when your data has been anonymized

Scary stuff: turns out anonymizing data doesn’t protect you from being identified after all.


Podcasts

Acquiring and sharing high-quality data

July 18th episode of the O’Reilly Data Show Podcast.


Upcoming Conferences of Interest

NODES 2019

“Neo4j Online Developer Expo & Summit.” Apparently, this is the first-ever such conference for the Neo4j community.


Classic Paper or Reference of the Week

The Design of Postgres

Written by Michael Stonebraker and Lawrence A. Rowe, this paper describes the architecture of Postgres as a successor to INGRES. It is, of course, the jumping-off point for the PostgreSQL of today.


Data Technologies of the Week

I couldn’t pick just one.

Apache Iceberg

“Apache Iceberg is an open table format for huge analytic datasets. Iceberg adds tables to Presto and Spark that use a high-performance format that works just like a SQL table.” Still incubating, but sounds very cool.

Apache Avro

Avro is a data serialization format; a minimal usage sketch follows the quoted list. “Avro provides functionality similar to systems such as Thrift, Protocol Buffers, etc. Avro differs from these systems in the following fundamental aspects.

  • Dynamic typing: Avro does not require that code be generated. Data is always accompanied by a schema that permits full processing of that data without code generation, static datatypes, etc. This facilitates construction of generic data-processing systems and languages.
  • Untagged data: Since the schema is present when data is read, considerably less type information need be encoded with data, resulting in smaller serialization size.
  • No manually-assigned field IDs: When a schema changes, both the old and new schema are always present when processing data, so differences may be resolved symbolically, using field names.”
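
To make the schema-in-the-file idea concrete, here is a minimal round trip using the fastavro library (my choice for the sketch; the official avro package works along similar lines), with a made-up schema:

```python
# Hypothetical Avro round trip with fastavro; schema and records are invented.
import fastavro

schema = {
    "name": "User",
    "type": "record",
    "fields": [
        {"name": "name", "type": "string"},
        {"name": "age", "type": "int"},
    ],
}
records = [{"name": "Ada", "age": 36}, {"name": "Grace", "age": 45}]

with open("users.avro", "wb") as out:
    fastavro.writer(out, schema, records)   # the schema travels with the data

with open("users.avro", "rb") as fo:
    for rec in fastavro.reader(fo):         # the reader recovers the schema from the file
        print(rec)
```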

Dask

“Dask natively scales Python. Dask provides advanced parallelism for analytics, enabling performance at scale for the tools you love.”
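
A small sketch of the drop-in, pandas-style parallelism Dask offers (the file glob is a placeholder):

```python
# Hypothetical Dask sketch: a pandas-like aggregation that runs in parallel.
import dask.dataframe as dd

df = dd.read_csv("events-*.csv")           # lazily treats many files as one frame
counts = df.groupby("event_type").size()   # builds a task graph; nothing runs yet
print(counts.compute())                    # compute() triggers parallel execution
```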


Metadata Standard of the Week

BISAC Subject Headings (2018 Edition)

“The BISAC Subject Headings List, also known as the BISAC Subject Codes List, is a standard used by many companies throughout the supply chain to categorize books based on topical content. The Subject Heading applied to a book can determine where the work is shelved in a brick and mortar store or the genre(s) under which it can be searched for in an internal database.” The Book Industry Study Group (BISG) provides a helpful FAQ for deciding what BISAC to use for a book.

Interesting Data-Related Blogs and Articles – Week of July 7, 2019

 

AWS

Amazon Aurora PostgreSQL Serverless – Now Generally Available

Serverless Aurora MySQL has been around for a while, but this is the first release of serverless Aurora for PostgreSQL.
In related news, the Aurora development team just won the 2019 ACM SIGMOD Systems Award.

How 3M Health Information Systems built a healthcare data reporting tool with Amazon Redshift

A case study of modernizing a legacy data warehouse on AWS, using Redshift, including lessons learned.

Improving Amazon Redshift Performance: Our Data Warehouse Story

From Udemy Engineering, a brief overview of how column stores like Redshift differ from traditional relational databases. The author discusses how to design a database to take advantage of Redshift’s fundamental architecture.

Optimizing Amazon DynamoDB scan latency through schema design

An overview of improving table scans by paying attention to your attributes.
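
For readers new to DynamoDB scans, a related lever (my own minimal sketch, not taken from the article) is projecting only the attributes you actually need:

```python
# Hypothetical DynamoDB scan that projects only two attributes (boto3).
# Table and attribute names are invented.
import boto3

table = boto3.resource("dynamodb").Table("user_events")
resp = table.scan(
    ProjectionExpression="user_id, event_time",   # return only these attributes
    Limit=100,                                    # page size for this request
)
for item in resp["Items"]:
    print(item)
```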


PostgreSQL

EnterpriseDB Acquired by Great Hill Partners

EnterpriseDB staff make major contributions to the PostgreSQL code base.
In a related development, Michael Stonebraker, the original architect of what is now PostgreSQL, will serve as a technical adviser to the company.

Generated columns in PostgreSQL 12

A cool new feature in the next release of PostgreSQL.
“This feature is known in various other DBMS as ‘calculated columns’, ‘virtual columns’, or ‘generated columns’.”
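
A minimal sketch of the syntax (my own example; requires PostgreSQL 12):

```python
# Hypothetical generated-column example for PostgreSQL 12 (psycopg2).
import psycopg2

ddl = """
CREATE TABLE line_items (
    quantity   int NOT NULL,
    unit_price numeric NOT NULL,
    -- Computed from the other columns and kept up to date automatically.
    total      numeric GENERATED ALWAYS AS (quantity * unit_price) STORED
)
"""

with psycopg2.connect("dbname=shop") as conn, conn.cursor() as cur:
    cur.execute(ddl)
    cur.execute("INSERT INTO line_items (quantity, unit_price) VALUES (3, 9.99)")
    cur.execute("SELECT total FROM line_items")
    print(cur.fetchone()[0])   # 29.97
```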

How We Solved a Storage Problem in PostgreSQL Without Adding a Single Byte of Storage

Pretty clever idea: reduce the size of the key used in sorting by hashing it. Probably not specific to PostgreSQL.

Postgresql Interval, Date, Timestamp and Time Data Types

“Does anyone really know what time it is?”
A primer on all the various ways of representing time in PostgreSQL.
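
A few of the moving parts in one tiny (made-up) query:

```python
# Hypothetical tour of a few PostgreSQL date/time types (psycopg2).
import psycopg2

with psycopg2.connect("dbname=demo") as conn, conn.cursor() as cur:
    cur.execute("""
        SELECT now()                          AS ts_with_tz,   -- timestamp with time zone
               current_date                   AS today,        -- date
               now() + interval '90 minutes'  AS soon,         -- interval arithmetic
               age(timestamp '2000-01-01')    AS since_y2k     -- returns an interval
    """)
    print(cur.fetchone())
```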


Software Updates

AWS RDS for PostgreSQL Supports New Minor Versions (2019-07-03)

PostgreSQL versions 11.4, 10.9, 9.6.14, 9.5.18, and 9.4.23 are now available for RDS.

DBeaver 6.1.2 (Released 2019-07-07)

pgAdmin 4.10 (Released 2019-07-04)


Practices and Architecture

Figuring out the future of distributed data systems

Summary of an interview with Martin Kleppmann, author of Designing Data-Intensive Applications, which is becoming an influential book in the field.

Spark core concepts explained

A brief primer with helpful graphics.


Classic Paper or Reference of the Week

The classic “Red Book,” Readings in Database Systems, is now in its fifth edition and available exclusively on the Web. Peter Bailis, an up-and-coming light in the database community, joins Joe Hellerstein and Michael Stonebraker as editors for this edition.

 

Book Review: “Big Data Glossary” by Pete Warden (O’Reilly Media)

“Big Data Glossary” could probably have been titled something like “Big Data Cheat Sheets,” because it’s both more and less than a glossary. Rather than a list of terms with definitions, the book is an excellent summary of the tools in the “big data” space.

Warden tackles eleven topics:

  1. Some background on fundamental techniques (e.g., key-value stores)
  2. NoSQL databases
  3. MapReduce
  4. Storage techniques
  5. “Cloud” servers
  6. Data processing technologies (e.g., R and Lucene)
  7. Natural Language Processing
  8. Machine Learning
  9. Visualization
  10. Acquisition
  11. Serialization

He covers none of these topics in great detail, which will no doubt cause carping among some folks.  However, I really like his approach of sketching broad themes, identifying key projects (or products) in each space, and pointing the reader to further research.  Because the field of “big data” is so large, this short book (it’s only 50 pages) serves the extremely useful purpose of tying together the field by providing an overview.

Highly recommended for folks looking to get their feet wet in the great lake of big data.