Google announced this week that two of its projects are going open source. Code for both DeepLab-V3+, the latest version of Google’s semantic image segmentation AI model, and Resonance Audio, Google’s spatial audio SDK, is now freely available.

Semantic image segmentation is the process by which a computer assigns a class label to every pixel in a photo or video, effectively outlining and naming each object it finds. Google Photos being able to not only see your dog in a picture but also identify it as a “dog” (versus a “cat” or “marmot”) is the result of such a process. In a blog post, Google mentions the Pixel 2’s single-lens portrait mode as a feature “this type of technology can enable,” but notes that DeepLab-V3+ itself isn’t responsible for that bit of technological magic.
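To make the idea concrete, here is a minimal sketch of what a segmentation model's output looks like and how it maps back to natural-language names. The label table and the toy mask below are invented for illustration; real models like DeepLab use standard class lists such as PASCAL VOC or Cityscapes.

```python
import numpy as np

# Hypothetical label map for illustration only; DeepLab's actual
# class IDs come from the dataset it was trained on.
LABELS = {0: "background", 1: "dog", 2: "cat"}

def summarize_segmentation(mask: np.ndarray) -> dict:
    """Count how many pixels were assigned to each named class.

    `mask` is a 2-D array of per-pixel class IDs, the typical output
    format of a semantic segmentation model.
    """
    ids, counts = np.unique(mask, return_counts=True)
    return {LABELS.get(i, f"class_{i}"): int(c) for i, c in zip(ids, counts)}

# A toy 4x4 "prediction": mostly background with a dog-shaped blob.
mask = np.array([
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
])
print(summarize_segmentation(mask))  # {'background': 12, 'dog': 4}
```

The per-pixel mask is what separates semantic segmentation from plain image classification: the model doesn't just say a dog is present, it says exactly which pixels are dog.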

Resonance Audio “enables developers to create more realistic VR and AR experiences on mobile and desktop,” Google says, and has been used in the development of apps like Star Wars: Jedi Challenges. The SDK was released last year but was only made open source as of Wednesday. In a nutshell, Resonance Audio uses positional data and audio filters to make the different sounds in an augmented- or virtual-reality experience seem to come from the appropriate positions around the user.
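A heavily simplified sketch of the underlying idea: given a source's position relative to the listener, derive per-channel gains so the sound appears to come from the right direction and distance. This is not Resonance Audio's actual algorithm (the SDK models head-related transfer functions, reverb, and occlusion); it is only distance attenuation plus a constant-power pan law.

```python
import math

def stereo_gains(listener_xy, source_xy):
    """Toy spatial-audio illustration: compute left/right channel gains
    from a source's 2-D position relative to the listener.
    """
    dx = source_xy[0] - listener_xy[0]
    dy = source_xy[1] - listener_xy[1]
    distance = math.hypot(dx, dy)
    # Inverse-distance attenuation, clamped so a co-located source
    # doesn't produce infinite gain.
    attenuation = 1.0 / max(distance, 1.0)
    # Azimuth: 0 is straight ahead (+y), positive toward the listener's right.
    azimuth = math.atan2(dx, dy)
    # Constant-power pan law across the front hemisphere.
    pan = max(-1.0, min(1.0, azimuth / (math.pi / 2)))
    theta = (pan + 1.0) * math.pi / 4  # 0 (hard left) .. pi/2 (hard right)
    left = attenuation * math.cos(theta)
    right = attenuation * math.sin(theta)
    return left, right

# A source two meters to the listener's right: nearly all signal in the
# right channel, attenuated by distance.
print(stereo_gains((0, 0), (2, 0)))
```

Real spatializers replace the pan law with per-ear filtering so that elevation and front/back cues survive, but the core input is the same: the relative position of each sound source.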

You can check out the code for both DeepLab-V3+ and Resonance Audio on GitHub.