AtelierClockwork

Week 6 Progress

Progress report: 177 of 177 videos watched and summarized. That means I’ve watched all of the session videos and now have a passable index of the ideas introduced at WWDC this year. Now that I’ve finished the “watch all the things” task I set out for myself, I need to figure out what to do with all of that information, aside from the obvious part of using it in my day job, and see whether, now that I have some momentum going, I can stick to the blogging thing again.

Discover Metal for immersive apps

Enhance your spatial computing app with RealityKit

Create immersive Unity apps

Bring your Unity VR app to a fully immersive space

These sessions all focused on developing games for visionOS and on other low-level implementation topics for the OS. It’s promising to see that Unity’s visionOS integrations are close to ready, and that it’s using technologies like MaterialX to bridge Unity’s material system to visionOS.

Customize on-device speech recognition

This session explained how to train the on-device speech recognizer for both user-specific and domain-specific content. It covers how to weight terms, and how to do things like use phonetic annotations to ensure that unusual words are recognized properly.
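As a rough illustration of biasing recognition toward specific vocabulary, here’s a minimal sketch using the long-standing contextualStrings hint on a recognition request; the session’s new custom language model training goes further, and those newer APIs aren’t reproduced here. The file URL and vocabulary list are placeholders, and speech recognition authorization still has to be requested separately.

```swift
import Speech

// A minimal sketch: bias on-device recognition toward domain terms via
// contextualStrings. The vocabulary and file URL below are placeholders.
func recognizeDomainAudio(at fileURL: URL) {
    guard let recognizer = SFSpeechRecognizer(), recognizer.isAvailable else { return }

    let request = SFSpeechURLRecognitionRequest(url: fileURL)
    request.requiresOnDeviceRecognition = true
    request.contextualStrings = ["RealityKit", "visionOS", "USDZ"] // hypothetical domain terms

    recognizer.recognitionTask(with: request) { result, error in
        if let result, result.isFinal {
            print(result.bestTranscription.formattedString)
        }
    }
}
```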

Explore 3D body pose and person segmentation in Vision

Detect animal poses in Vision

These sessions show off the enhancements to Vision. Support for 3D pose detection for people adds interesting potential for modeling and animation, and the new person segmentation should allow lots of interesting new effects.
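For a sense of what the request-based API looks like, here’s a minimal sketch that runs the three new request types against a single image; the image URL is a placeholder and the results are only counted, not drawn.

```swift
import Vision

// A minimal sketch: run the new 3D body pose, person instance mask, and
// animal body pose requests against one image and count the observations.
func analyzeImage(at url: URL) throws {
    let humanPose3D = VNDetectHumanBodyPose3DRequest()
    let personMasks = VNGeneratePersonInstanceMaskRequest()
    let animalPose = VNDetectAnimalBodyPoseRequest()

    let handler = VNImageRequestHandler(url: url)
    try handler.perform([humanPose3D, personMasks, animalPose])

    print("3D human poses:", humanPose3D.results?.count ?? 0)
    print("Person instances:", personMasks.results?.count ?? 0)
    print("Animal poses:", animalPose.results?.count ?? 0)
}
```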

Explore Natural Language multilingual models

Discover machine learning enhancements in Create ML

Optimize machine learning for Metal apps

Improve Core ML integration with async prediction

Use Core ML Tools for machine learning model compression

These sessions explained all of the new features in Core ML and its related tooling that are coming to the new OSs. This includes support for multilingual models, support for LLMs, improved model compression options, and the ability to use async predictions to efficiently handle many prediction requests.
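Here’s a minimal sketch of the async prediction flow, assuming a compiled model and a dictionary-backed feature provider; the model URL and feature names are hypothetical, and the exact async signature may differ slightly from what’s shown.

```swift
import CoreML

// A minimal sketch: await a prediction off the main thread so many requests
// can be serviced concurrently. Model URL and feature names are hypothetical.
func classify(features: [String: Any], modelURL: URL) async throws -> MLFeatureProvider {
    let model = try MLModel(contentsOf: modelURL) // compiled .mlmodelc URL
    let input = try MLDictionaryFeatureProvider(dictionary: features)
    return try await model.prediction(from: input)
}
```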

Integrate with motorized iPhone stands using DockKit

This session showed off what you can do with DockKit. This is a system-level toolkit for interacting with compatible motorized iPhone docks. It offers both person tracking and point-of-interest tracking.

What’s new in privacy

This session is an overview of all of the new privacy technologies coming in the new OSs. Since I watched it late in my run, there wasn’t anything of interest that hadn’t already appeared in another session video.

Protect your Mac app with environment constraints

This session explains how to add environment constraints to Mac apps, which give more control over the allowed execution environment. In particular, developers can require that an app’s helper tools only be launched by the app itself, preventing a hostile parent process from being used to alter execution behavior.

Support HDR images in your app

This session explains the new tooling that’s coming to the OS for working with HDR images. Image views now support setting the displayed content to SDR, limited HDR, or full HDR. Limited HDR is meant to be used in situations where there is mixed SDR and HDR content, or where HDR content would be distracting.
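A minimal UIKit sketch of that dynamic range control, assuming an asset named “sunset” that contains HDR data; SwiftUI has an equivalent allowedDynamicRange modifier.

```swift
import UIKit

// A minimal sketch: choose how much HDR headroom an image view may use.
// The asset name "sunset" is a placeholder.
let imageView = UIImageView(image: UIImage(named: "sunset"))

// .standard tone-maps to SDR, .constrainedHigh allows limited HDR for mixed
// content, and .high allows the full HDR range.
imageView.preferredImageDynamicRange = .constrainedHigh
```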

Support external cameras in your iPadOS app

This session explains how to use a wired external camera in an iPadOS app, and how to handle the fact that device rotation and camera rotation can now be independent of each other.
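A minimal sketch of discovering a wired camera, assuming the new external device type; the independent-rotation handling is left to the newer rotation coordinator API, which is only mentioned in a comment here.

```swift
import AVFoundation

// A minimal sketch: look for a wired external camera on iPad.
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.external],
    mediaType: .video,
    position: .unspecified
)

if let externalCamera = discovery.devices.first {
    print("Found external camera: \(externalCamera.localizedName)")
    // AVCaptureDevice.RotationCoordinator can then be used to keep video
    // orientation correct when device and camera rotation diverge.
}
```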

What’s new in CSS

Explore media formats for the web

These sessions show off the work that the WebKit team has been doing to support emerging web standards. It’s interesting to see successor formats to JPEG, PNG, and GIF arriving after all these years.

What’s new in web apps

This session demonstrates the new features that are available in web apps. The most important feature is that you can now create web apps from Safari on macOS, and there are new configuration options to help site maintainers scope and customize how a web app looks.

What’s new in Safari extensions

This session explains all of the new features coming to Safari extensions. Among other things it includes more CSS selectors for content blockers, and new permissions for redirecting requests or modifying headers.

Meet Object Capture for iOS

This session shows the new ObjectCaptureView on iOS. It’s a SwiftUI view and uses a state machine to manage the capture flow, which is interesting as an architecture note. It supports capturing LiDAR data alongside photos, and can either generate the 3D model on device or export the images for processing on a Mac in Reality Composer Pro.
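A minimal sketch of hosting the capture UI, assuming ObjectCaptureSession and ObjectCaptureView as shown in the session; the images directory is a placeholder, and the state machine’s detection and capture transitions are omitted.

```swift
import RealityKit
import SwiftUI

// A minimal sketch: host ObjectCaptureView and start a capture session.
// The images directory is a placeholder; state transitions are omitted.
struct CaptureView: View {
    @State private var session = ObjectCaptureSession()
    private let imagesDirectory = FileManager.default.temporaryDirectory
        .appendingPathComponent("Captures", isDirectory: true)

    var body: some View {
        ObjectCaptureView(session: session)
            .onAppear {
                session.start(imagesDirectory: imagesDirectory,
                              configuration: ObjectCaptureSession.Configuration())
            }
    }
}
```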

Explore enhancements to RoomPlan

This session has details on improvements to RoomPlan. The marquee feature is multi-room scans, but it also has support for more object types, more complex rooms, and exporting metadata along with the USDZ file.
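A minimal sketch of a single-room scan with RoomCaptureView; the multi-room merging shown in the session relies on a newer structure-building API that isn’t reproduced here.

```swift
import RoomPlan
import UIKit

// A minimal sketch: present RoomCaptureView and start a single-room scan.
// Results arrive via the view's delegate callbacks, omitted here.
final class RoomScanViewController: UIViewController {
    private var roomCaptureView: RoomCaptureView!

    override func viewDidLoad() {
        super.viewDidLoad()
        roomCaptureView = RoomCaptureView(frame: view.bounds)
        view.addSubview(roomCaptureView)
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        roomCaptureView.captureSession.run(configuration: RoomCaptureSession.Configuration())
    }
}
```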

Discover Quick Look for spatial computing

This session shows off Quick Look on visionOS. It has full support for the Quick Look features of other platforms, plus the ability to drag content out of your app into a windowed Quick Look view.

Create 3D models for Quick Look spatial experiences

This session covers using 3D models in Quick Look on visionOS, and includes lots of detail about how to optimize materials, textures, and geometry to keep the model performant in use.

Work with Reality Composer Pro content in Xcode

This session explains how to work with content from Reality Composer Pro: both how to load the content into a view, and how data in the Entity Component System is structured and worked with from Swift.
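A minimal sketch of loading authored content into a RealityView; the scene name and the generated RealityKitContent package name depend on the project, so both are assumptions here.

```swift
import SwiftUI
import RealityKit
import RealityKitContent // Swift package generated by Reality Composer Pro (name is project-specific)

// A minimal sketch: load an entity authored in Reality Composer Pro.
// "Scene" is a placeholder for whatever the scene is actually named.
struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                content.add(scene)
            }
        }
    }
}
```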

Explore rendering for spatial computing

This session covers details on how to optimize content for rendering in visionOS. In particular, it covers optimizing models and layer-based content to better handle the foveated rendering.

Build spatial SharePlay experiences

This session explains how SharePlay works in visionOS, how the scenes are set up so that people can collaborate around different elements, and how to control which windows are shared in the SharePlay session.
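As a foundation for that, here’s a minimal sketch of defining a GroupActivities activity; the spatial templates and window-sharing controls the session covers sit on top of this and aren’t shown. The identifier and metadata are hypothetical.

```swift
import GroupActivities

// A minimal sketch: a custom activity that can be activated and joined from
// a visionOS app. Identifier and title are placeholders.
struct BoardGameActivity: GroupActivity {
    static let activityIdentifier = "com.example.boardgame"

    var metadata: GroupActivityMetadata {
        var metadata = GroupActivityMetadata()
        metadata.title = "Board Game"
        metadata.type = .generic
        return metadata
    }
}
```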

Meet ARKit for spatial computing

This session summarizes the ARKit API updates added to support visionOS. These include world tracking, hand tracking, and scene reconstruction.
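A minimal sketch of the provider-based API, assuming the world and hand tracking providers named in the session; authorization prompts and error handling are reduced to the bare minimum.

```swift
import ARKit

// A minimal sketch: run world and hand tracking providers on visionOS and
// observe hand anchor updates. Authorization handling is omitted.
let session = ARKitSession()
let worldTracking = WorldTrackingProvider()
let handTracking = HandTrackingProvider()

func startTracking() async {
    do {
        try await session.run([worldTracking, handTracking])
        for await update in handTracking.anchorUpdates {
            print(update.event, update.anchor.chirality)
        }
    } catch {
        print("ARKit session failed: \(error)")
    }
}
```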

Get started with building apps for spatial computing

This session is a kick-off guide to working with spatial computing. It shows off the variety of sample projects Apple has provided and explains how to get started.
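A minimal sketch of the app structure those samples build on: a window plus an immersive space that renders RealityKit content. The names are placeholders, and the immersive space still has to be opened at runtime with the openImmersiveSpace environment action.

```swift
import SwiftUI
import RealityKit

// A minimal sketch of a visionOS app: a window scene plus an immersive space.
// The space is opened at runtime via the openImmersiveSpace environment action.
@main
struct SpatialSampleApp: App {
    var body: some Scene {
        WindowGroup {
            Text("Hello, spatial computing")
        }

        ImmersiveSpace(id: "Immersive") {
            RealityView { content in
                content.add(ModelEntity(mesh: .generateSphere(radius: 0.1)))
            }
        }
    }
}
```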