
Google hosts the Apache HBase community at HBaseCon West 2017

Friday, March 31, 2017

We’re excited to announce that Google will host and organize HBaseCon West 2017, the official conference for the Apache HBase community, on June 12. Registration for the event in Mountain View, CA, is free and the call for papers (CFP) is open through April 24. Seats are limited and the CFP closes soon, so act fast.


Apache HBase is the original open source implementation of the design concepts behind Bigtable, a critical piece of Google's internal data infrastructure that was first described in a 2006 research paper and earned a SIGOPS Hall of Fame award last year. Since the founding of HBase, its community has made impressive advances supporting massive scale with enterprise users including Alibaba, Apple, Facebook, and Visa. The community is fostering a rich and still-growing ecosystem including Apache Phoenix, OpenTSDB, Apache Trafodion, Apache Kylin and many others.

Now that Bigtable is available to Google Cloud users through Google Cloud Bigtable, developers have the benefit of platform choices for apps that rely on high-volume and low-latency reads and writes. Without the ability to build portable applications on open APIs, however, even that freedom of choice can lead to a dead end, something Google addresses through its investment in open standards like Apache Beam, Kubernetes and TensorFlow.

To that end, Google’s Bigtable team has been actively participating in the HBase community. We’ve helped co-author the HBase 1.0 API and have standardized on that API in Cloud Bigtable. This design choice means developers with HBase experience don’t need to learn a new API for building cloud-native applications, ensures Cloud Bigtable users have access to the large Apache Hadoop ecosystem and alleviates concerns about long-term lock-in.
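The compatibility claim above concerns the Java HBase 1.0 API, but the portability idea is easy to sketch in Python using the open source HappyBase client, which talks to HBase through its Thrift gateway. Everything in this sketch (host, table, column family, row key) is a made-up example, and the Cloud Bigtable swap is an assumption based on Google's HappyBase-compatible package; the point is simply that code written against an open HBase API is not tied to one backend.

```python
# Illustrative sketch only: assumes an HBase cluster with its Thrift
# gateway running on localhost, and a table 'metrics' with column
# family 'cf'. Not production code.
import happybase

# Connect to a self-hosted HBase cluster via its Thrift gateway.
connection = happybase.Connection(host='localhost', port=9090)
table = connection.table('metrics')

# Write one cell: a row key, then a {column: value} mapping (all bytes).
table.put(b'sensor-42#2017-03-31', {b'cf:temperature': b'21.3'})

# Low-latency point read on the row key.
row = table.row(b'sensor-42#2017-03-31')
print(row[b'cf:temperature'])  # b'21.3'

connection.close()

# Swapping the backend for Cloud Bigtable is, in principle, a change
# to how 'connection' is constructed (e.g. via Google's HappyBase-
# compatible Cloud Bigtable package), not to the calls above.
```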

We hope you’ll join us and the HBase community at HBaseCon West 2017. We recommend registering early as there is no registration available on site. As usual, sessions are selected by the HBase community from a pool reflecting some of the world’s largest and most advanced production deployments.

Register soon or submit a paper for HBaseCon. Remember, the CFP closes on April 24! We look forward to seeing you at the conference.

By Carter Page and Michael Stack, Apache HBase Project Management Committee members

A New Home for Google Open Source

Tuesday, March 28, 2017

Free and open source software has been part of our technical and organizational foundation since Google’s early beginnings. From servers running the Linux kernel to an internal culture of being able to patch any other team's code, open source is part of everything we do. In return, we've released millions of lines of open source code, run programs like Google Summer of Code and Google Code-in, and sponsor open source projects and communities through organizations like Software Freedom Conservancy, the Apache Software Foundation, and many others.

Today, we’re launching opensource.google.com, a new website for Google Open Source that ties together all of our initiatives with information on how we use, release, and support open source.

This new site showcases the breadth and depth of our love for open source. It will contain the expected things: our programs, organizations we support, and a comprehensive list of open source projects we've released. But it also contains something unexpected: a look under the hood at how we "do" open source.

Helping you find interesting open source

One of the tenets of our philosophy towards releasing open source code is that "more is better." We don't know which projects will find an audience, so we help teams release code whenever possible. As a result, we have released thousands of projects under open source licenses ranging from larger products like TensorFlow, Go, and Kubernetes to smaller projects such as Light My Piano, Neuroglancer and Periph.io. Some are fully supported while others are experimental or just for fun. With so many projects spread across 100 GitHub organizations and our self-hosted Git service, it can be difficult to see the scope and scale of our open source footprint.

To provide a more complete picture, we are launching a directory of our open source projects which we will expand over time. For many of these projects we are also adding information about how they are used inside Google. In the future, we hope to add more information about project lifecycle and maturity.

How we do open source

Open source is about more than just code; it's also about community and process. Participating in open source projects and communities as a large corporation comes with its own unique set of challenges. In 2014, we helped form the TODO Group, which provides a forum to collaborate and share best practices among companies that are deeply committed to open source. Inspired by many discussions we've had over the years, today we are publishing our internal documentation for how we do open source at Google.

These docs explain the processes we follow for releasing new open source projects and submitting patches to others' projects, as well as how we manage the open source code that we bring into the company and use ourselves. Beyond the how, they also outline why we do things the way we do, such as why we only use code under certain licenses or why we require contributor license agreements for all patches we receive.

Our policies and procedures are informed by many years of experience and lessons we've learned along the way. We know that our particular approach to open source might not be right for everyone—there's more than one way to do open source—and so these docs should not be read as a "how-to" guide. Similar to how it can be valuable to read another engineer's source code to see how they solved a problem, we hope that others find value in seeing how we approach and think about open source at Google.

To hear a little more about the backstory of the new Google Open Source site, we invite you to listen to the latest episode from our friends at The Changelog. We hope you enjoy exploring the new site!

By Will Norris, Open Source Programs Office

The latest round of Google Open Source Peer Bonus winners

Monday, March 27, 2017

Google relies on open source software throughout our systems, much of it written by non-Googlers. We’re always looking for ways to say “thank you!” so in 2011 we started asking Googlers to nominate open source contributors outside of the company who have made significant contributions to codebases we use or think are important. Since the program’s inception, we’ve recognized more than 500 developers from 30+ countries who have contributed their time and talent to over 400 open source projects.

Today we are pleased to announce the latest round of awardees, 52 individuals we’d like to recognize for their dedication to open source communities. The following is a list of everyone who gave us permission to thank them publicly:


Philipp Hancke (Adapter.js)
Geoff Greer (Ag)
Dzmitry Shylovich (Angular)
David Kalnischkies (Apt)
Peter Mounce (Bazel)
Yuki Yugui Sonoda (Bazel)
Eric Fiselier (benchmark)
Rob Stradling (Certificate Transparency)
Ke He (Chromium)
Daniel Micay (CopperheadOS)
Nico Huber (coreboot)
Kyösti Mälkki (coreboot)
Jana Moudrá (Dart)
John Wiegley (Emacs)
Alex Saveau (FirebaseUI-Android)
Toke Hoiland-Jorgensen (Flent)
Hanno Böck (Fuzzing Project)
Luca Milanesio (Gerrit)
Daniel Theophanes (Go programming language)
Josh Snyder (Go programming language)
Brendan Tracey (Go programming language)
Elias Naur (Go on Mobile)
Anthonios Partheniou (Google Cloud Datalab)
Marcus Meissner (gPhoto2)
Matt Butcher (Helm)
Fernando Perez (Jupyter & IPython)
Michelle Noorali (Kubernetes & Helm)
Prosper Otemuyiwa (Laravel Hackathon Starter)
Keith Busch (Linux kernel)
Thomas Caswell (matplotlib)
Tatsuhiro Tsujikawa (nghttp2)
Anna Henningsen (Node.js)
Charles Harris (NumPy)
Jeff Reback (pandas)
Ludovic Rousseau (PCSC-Lite, CCID)
Matti Picus (PyPy)
Salvatore Sanfilippo (Redis)
Ralf Gommers (SciPy)
Kevin O'Connor (SeaBIOS)
Sam Aaron (Sonic Pi)
Michael Tyson (The Amazing Audio Engine)
Rob Landley (Toybox)
Bin Meng (U-Boot)
Ben Noordhuis (V8)
Fatih Arslan (vim-go)
Adam Treat (WebKit)
Chris Dumez (WebKit)
Sean Larkin (Webpack)
Tobias Koppers (Webpack)
Alexis La Goutte (Wireshark dissector for QUIC)

Congratulations to all of the awardees, past and present! Thank you for your contributions.

By Helen Hu, Open Source Programs Office

Dispatches from the latest Mercurial sprints

Friday, March 24, 2017

On March 10th-12th, the Mercurial project held one of its twice-a-year sprints in the Google Mountain View office. Mercurial is a distributed version control system, used by Google, W3C, OpenJDK and Mozilla among others. We had 40 developers in attendance, some from companies with large Mercurial deployments and some individual contributors who volunteer in their spare time.

One of the major themes we discussed was user-friendliness. Mercurial developers work hard to keep the command-line interface backwards compatible, but at the same time, we would like to make progress by smoothing out some rough edges. We discussed how we can offer an improved user interface that users can opt in to, without breaking the backwards compatibility constraint. We also talked about how to make Mercurial’s Changeset Evolution feature easier to use.

We considered moving Mercurial past SHA1 for revision identification in order to strengthen the security and integrity of Mercurial repositories, in light of the recent demonstration of practical SHA1 collisions. A rough consensus on a plan started to emerge, and design docs should start to circulate in the next month or so.
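For context on why the hash function matters: a Mercurial revision ID is, conceptually, a hash over the revision's text and its parent IDs, so repository integrity rests on collision resistance. The sketch below is our simplified, illustrative rendering of that scheme, not code from Mercurial itself, and the SHA-256 variant is only there to show where a stronger hash would slot in.

```python
# Simplified, illustrative rendering of Mercurial-style revision IDs:
# hash the (sorted) parent IDs followed by the revision text. This is
# not Mercurial's actual implementation.
import hashlib

NULL_ID = b'\x00' * 20  # stand-in for a missing parent

def revision_id(text, p1=NULL_ID, p2=NULL_ID, algo=hashlib.sha1):
    """Derive a revision ID from parent IDs plus revision text."""
    lo, hi = sorted((p1, p2))
    return algo(lo + hi + text).digest()

root = revision_id(b'first version of a file\n')
child = revision_id(b'second version of a file\n', p1=root)

# Swapping in SHA-256 is conceptually one line; the hard problems are
# migrating existing 20-byte hashes, wire protocols and on-disk formats.
child256 = revision_id(b'second version of a file\n', p1=root,
                       algo=hashlib.sha256)
print(child.hex(), child256.hex())
```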

We also talked about performance work, such as new storage layers that would scale more effectively and work better with clones that contain only part of a repository's history, a key requirement for Mercurial adoption in enterprise environments with large repositories, like Google.

If you are interested in finding out more about Mercurial (or perhaps you’d like to contribute!) you can find our mailing list information here.

By Martin von Zweigbergk and Augie Fackler, Software Engineers

An Upgrade to SyntaxNet, New Models and a Parsing Competition

Wednesday, March 22, 2017

Crossposted from the Google Research Blog

At Google, we continuously improve the language understanding capabilities used in applications ranging from generation of email responses to translation. Last summer, we open-sourced SyntaxNet, a neural-network framework for analyzing and understanding the grammatical structure of sentences. Included in our release was Parsey McParseface, a state-of-the-art model that we had trained for analyzing English, followed quickly by a collection of pre-trained models for 40 additional languages, which we dubbed Parsey's Cousins. While we were excited to share our research and to provide these resources to the broader community, building machine learning systems that work well for languages other than English remains an ongoing challenge. We are excited to announce a few new research resources, available now, that address this problem.

SyntaxNet Upgrade
We are releasing a major upgrade to SyntaxNet. This upgrade incorporates nearly a year’s worth of our research on multilingual language understanding, and is available to anyone interested in building systems for processing and understanding text. At the core of the upgrade is a new technology that enables learning of richly layered representations of input sentences. More specifically, the upgrade extends TensorFlow to allow joint modeling of multiple levels of linguistic structure, and to allow neural-network architectures to be created dynamically during processing of a sentence or document.

Our upgrade makes it easy, for example, to build character-based models that learn to compose individual characters into words (e.g. ‘c-a-t’ spells ‘cat’). By doing so, the models can learn that words can be related to each other because they share common parts (e.g. ‘cats’ is the plural of ‘cat’ and shares the same stem; ‘wildcat’ is a type of ‘cat’). Parsey and Parsey’s Cousins, on the other hand, operated over sequences of words. As a result, they were forced to memorize words seen during training and relied mostly on the context to determine the grammatical function of previously unseen words.
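To make the contrast concrete, here is a deliberately tiny sketch of character-based composition. It is ours, not SyntaxNet's implementation (which builds these networks dynamically inside TensorFlow), and all names and sizes are made up: a small recurrent unit reads one character embedding at a time, and its final state becomes the word's representation, so words sharing a stem produce related vectors.

```python
# Toy sketch of character-based word composition (not SyntaxNet code):
# a tiny recurrent network reads a word one character at a time and
# its final hidden state serves as the word's representation.
import numpy as np

rng = np.random.RandomState(0)
CHARS = 'abcdefghijklmnopqrstuvwxyz'
EMB, HID = 8, 16

char_emb = rng.randn(len(CHARS), EMB) * 0.1   # one embedding per character
W_x = rng.randn(EMB, HID) * 0.1               # input-to-hidden weights
W_h = rng.randn(HID, HID) * 0.1               # hidden-to-hidden weights

def compose(word):
    """Fold a word's characters into a single vector with a simple RNN."""
    h = np.zeros(HID)
    for ch in word:
        x = char_emb[CHARS.index(ch)]
        h = np.tanh(x @ W_x + h @ W_h)
    return h

def similarity(a, b):
    va, vb = compose(a), compose(b)
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

# Even untrained, words sharing a stem start from correlated states;
# training would sharpen this so 'cat' and 'cats' land close together
# while unrelated words do not.
print(similarity('cat', 'cats'), similarity('cat', 'dog'))
```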

As an example, consider the following (meaningless but grammatically correct) sentence:

“The gostak distims the doshes.”

This sentence was originally coined by Andrew Ingraham, who explained: “You do not know what this means; nor do I. But if we assume that it is English, we know that the doshes are distimmed by the gostak. We know too that one distimmer of doshes is a gostak.” Systematic patterns in morphology and syntax allow us to guess the grammatical function of words even when they are completely novel: we understand that ‘doshes’ is the plural of the noun ‘dosh’ (similar to the ‘cats’ example above) or that ‘distims’ is the third person singular of the verb ‘distim’. Based on this analysis, we can then derive the overall structure of this sentence even though we have never seen the words before.
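The same intuition can be caricatured in a few lines of rule-based Python. This is our toy, with hand-written patterns; the models described here learn such regularities from data instead. Still, it shows how suffixes and nearby function words alone already hint at grammatical roles for never-seen stems.

```python
# Toy illustration (ours, not Google's): even with made-up stems,
# English suffixes and nearby function words hint at grammatical roles.
import re

RULES = [
    (r'(the|a|an)\s+\w+es?', 'noun (head of a noun phrase)'),
    (r'\w+ed',  'verb, past tense / past participle'),
    (r'\w+s',   'plural noun or 3rd-person-singular verb'),
]

def guess(phrase):
    """Guess a grammatical function from surface patterns alone."""
    for pattern, label in RULES:
        if re.fullmatch(pattern, phrase):
            return label
    return 'unknown'

for phrase in ['the doshes', 'distimmed', 'distims']:
    print(phrase, '->', guess(phrase))
```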

ParseySaurus
To showcase the new capabilities provided by our upgrade to SyntaxNet, we are releasing a set of new pretrained models called ParseySaurus. These models use the character-based input representation mentioned above and are thus much better at predicting the meaning of new words based both on their spelling and how they are used in context. The ParseySaurus models are far more accurate than Parsey’s Cousins (reducing errors by as much as 25%), particularly for morphologically rich languages like Russian, or agglutinative languages like Turkish and Hungarian. In those languages there can be dozens of forms for each word, and many of these forms might never be observed during training, even in a very large corpus.

Consider the following fictitious Russian sentence, where again the stems are meaningless, but the suffixes define an unambiguous interpretation of the sentence structure:
Even though our Russian ParseySaurus model has never seen these words, it can correctly analyze the sentence by inspecting the character sequences which constitute each word. In doing so, the system can determine many properties of the words (notice how many more morphological features there are here than in the English example). To see the sentence as ParseySaurus does, here is a visualization of how the model analyzes this sentence:
Each square represents one node in the neural network graph, and lines show the connections between them. The left-side “tail” of the graph shows the model consuming the input as one long string of characters. These are intermittently passed to the right side, where the rich web of connections shows the model composing words into phrases and producing a syntactic parse. Check out the full-size rendering here.

A Competition
You might be wondering whether character-based models are all we need or whether there are other techniques that might be important. SyntaxNet has lots more to offer, like beam search and different training objectives, but there are of course also many other possibilities. To find out what works well in practice, we are helping co-organize, together with Charles University and other colleagues, a multilingual parsing competition at this year’s Conference on Computational Natural Language Learning (CoNLL), with the goal of building syntactic parsing systems that work well in real-world settings and for 45 different languages.

The competition is made possible by the Universal Dependencies (UD) initiative, whose goal is to develop cross-linguistically consistent treebanks. Because machine-learned models can only be as good as the data that they have access to, we have been contributing data to UD since 2013. For the competition, we partnered with UD and DFKI to build a new multilingual evaluation set consisting of 1000 sentences that have been translated into 20+ different languages and annotated by linguists with parse trees. This evaluation set is the first of its kind (in the past, each language had its own independent evaluation set) and will enable more consistent cross-lingual comparisons. Because the sentences have the same meaning and have been annotated according to the same guidelines, we will be able to get closer to answering the question of which languages might be harder to parse.
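For readers who want to poke at the data: UD treebanks (and the shared task data) are distributed in the CoNLL-U format, one token per line with ten tab-separated columns and blank lines between sentences. A minimal reader like the sketch below (the filename is a placeholder) is enough to start inspecting the annotations.

```python
# Minimal CoNLL-U reader (illustrative; 'example.conllu' is a placeholder).
# Each token line has 10 tab-separated columns; sentences are separated
# by blank lines and '#' lines carry sentence-level comments.
FIELDS = ['id', 'form', 'lemma', 'upos', 'xpos',
          'feats', 'head', 'deprel', 'deps', 'misc']

def read_conllu(path):
    """Yield sentences as lists of {field: value} token dicts."""
    sentence = []
    with open(path, encoding='utf-8') as f:
        for line in f:
            line = line.rstrip('\n')
            if not line:                    # blank line ends a sentence
                if sentence:
                    yield sentence
                    sentence = []
            elif not line.startswith('#'):  # skip comment lines
                sentence.append(dict(zip(FIELDS, line.split('\t'))))
    if sentence:
        yield sentence

for sent in read_conllu('example.conllu'):
    # e.g. print each word with its universal POS tag and head index
    print([(tok['form'], tok['upos'], tok['head']) for tok in sent])
```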

We hope that the upgraded SyntaxNet framework and our pre-trained ParseySaurus models will inspire researchers to participate in the competition. We have additionally created a tutorial showing how to load a Docker image and train models on the Google Cloud Platform, to facilitate participation by smaller teams with limited resources. So, if you have an idea for making your own models with the SyntaxNet framework, sign up to compete! We believe that the configurations that we are releasing are a good place to start, but we look forward to seeing how participants will be able to extend and improve these models, or perhaps create better ones!

Thanks to everyone involved who made this competition happen, including our collaborators at UD-Pipe, who provide another baseline implementation to make it easy to enter the competition. Happy parsing from the main developers, Chris Alberti, Daniel Andor, Ivan Bogatyy, Mark Omernick, Zora Tung and Ji Ma!

By David Weiss and Slav Petrov, Research Scientists