Atelier Clockwork

WWDC24 Week 6

I managed to wrap up the videos early so I don't need to think about this project during my vacation, which will be nice. Overall I feel like there were fewer sessions that were totally outside of my area of interest this year. Part of it may be because I've now done enough work with ML that the sessions there weren't completely inscrutable.

Now that I've done all the video watching, my next plan on deck is to get my WWDC session progress app into a state I'm happy with and can use, so that it's ready to go for next year.

Bring your machine learning and AI models to Apple silicon

This session is entirely focused on bringing existing ML models to Apple silicon, and covers the new features for quantizing models as well as support for stateful models and multifunction models.

Optimize for the spatial web

This session covers how web technologies can be used on visionOS. In particular, it talks about how to get interactive elements to highlight cleanly when the user looks at them, how to display spatial photos, and how to use WebSpeech, WebAudio, and WebXR.

What’s new in Quick Look for visionOS

Quick Look on visionOS is interestingly different from other platforms, as it's tied into other applications and lets applications present content with it. This year it adds the ability to display a collection of objects, and exposes enough data that an application can keep track of whether the preview for an item has been closed. Quick Look also supports configurations within a file on all platforms now, so that one asset can represent multiple colors or other configurations of a model.
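
A minimal sketch of what presenting a collection looks like, assuming the visionOS PreviewApplication API from this session; the open(urls:selectedURL:) call and the returned session are the names as I noted them, so treat them as assumptions:

```swift
import SwiftUI
import QuickLook

struct ModelGallery: View {
    // Placeholder URLs; any local USDZ assets would do.
    let items = [
        URL(fileURLWithPath: "/path/to/chair.usdz"),
        URL(fileURLWithPath: "/path/to/lamp.usdz"),
    ]

    var body: some View {
        Button("Preview Collection") {
            // Opens every item in one Quick Look window; holding on to the
            // returned session is what lets the app notice when the user
            // has closed the preview.
            let session = PreviewApplication.open(urls: items, selectedURL: items.first)
            _ = session
        }
    }
}
```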

Deploy machine learning and AI models on-device with Core ML

This session follows on from the technologies discussed in “Bring your machine learning and AI models to Apple silicon”, and shows how to deploy models with these features using Core ML, and how to measure performance.
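
A minimal sketch of the deployment side: loading a compiled model with a configuration, then threading state through predictions. The makeState()/prediction(from:using:) pairing is the stateful API as I understood it from the session:

```swift
import CoreML

// Load a compiled model and run a stateful prediction.
// The URL and feature provider are placeholders.
func runModel(at url: URL, input: MLFeatureProvider) throws -> MLFeatureProvider {
    let config = MLModelConfiguration()
    config.computeUnits = .cpuAndNeuralEngine // prefer the Neural Engine

    let model = try MLModel(contentsOf: url, configuration: config)

    // For stateful models, the state (for example a KV cache) is created
    // once and passed into every prediction call so the model can update it.
    let state = model.makeState()
    return try model.prediction(from: input, using: state)
}
```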

Build a spatial drawing app with RealityKit

This session walks through many of the features needed to implement a 3D drawing app, and covers a lot of useful low-level details for working in RealityKit. In particular, it covers how to work with different vertex buffer formats and low-level mesh formats, and how to create a low-level texture driven by a Metal shader.

Support real-time ML inference on the CPU

This is an in-depth dive into using BNNS Graph to execute an ML model in a real-time context, such as processing audio. It walks through importing the model, setting up the project, and invoking the BNNS graph from the real-time audio code.

Render Metal with passthrough in visionOS

This is a detailed talk about what it takes to create a renderer that reads the transforms for the user’s viewpoint and other trackable anchors, and then renders the scene correctly for the user. It also explains how the requirements differ in passthrough mode compared to full immersion.
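
The entry point, as best I can reconstruct it, is an ImmersiveSpace hosting a CompositorLayer; the render loop function here is a hypothetical stand-in for everything the session actually covers:

```swift
import SwiftUI
import CompositorServices

@main
struct PassthroughRendererApp: App {
    // .mixed keeps passthrough visible behind rendered content;
    // .full replaces the user's surroundings entirely.
    @State private var style: ImmersionStyle = .mixed

    var body: some Scene {
        ImmersiveSpace(id: "renderer") {
            CompositorLayer { layerRenderer in
                // Hand the LayerRenderer off to a dedicated render thread
                // that waits for frames, queries the device anchor, and
                // encodes Metal work.
                startRenderLoop(layerRenderer)
            }
        }
        .immersionStyle(selection: $style, in: .mixed, .full)
    }
}

func startRenderLoop(_ renderer: LayerRenderer) {
    // Placeholder for the real render loop described in the session.
}
```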

Build a great Lock Screen camera capture experience

Locked camera capture allows third-party camera apps to be used while the device is locked. This session explains how to build an extension to do this, what the limitations are when running in that extension, and how data can be sent from your extension to the photo library or your application.

Optimize your 3D assets for spatial computing

This session walks through the practical concerns for how to optimize a scene for rendering on visionOS. This includes texture optimizations, baking lighting, when to use different texture types, and splitting up geometry so that objects not in view can be culled.

Break into the RealityKit debugger

This session walks through using the RealityKit debugger to spot some common issues, then talks through what could have caused each issue and how to address it.

Explore App Store server APIs for In-App Purchase

This session covers enhancements to the App Store server APIs, including the ability to receive notifications for all purchases, as well as fetching the full transaction history for a user. It also covers how to create various types of offers, such as win-back offers.

Build immersive web experiences with WebXR

This session covers the practical concerns for setting up a WebXR experience, talks about some available frameworks to make it easier to work with, and then goes into detail about how the different input types work.

Discover area mode for Object Capture

Area mode allows you to capture objects in contexts where the object can’t be easily isolated. Object Capture also added the ability to export quad meshes, whose four-sided polygons are easier for 3D artists to work with, higher-quality export options, and easier ways to add extra data to Object Capture samples.

Design interactive experiences for visionOS

This session walks through some of the design choices from the Encounter Dinosaurs app, and primarily focuses on how to design interactions to help guide a user in immersive experiences.

Train your machine learning and AI models on Apple GPUs

This session is aimed at developers who are used to using Python to train ML models and want to take advantage of hardware acceleration on Apple Silicon. It covers the frameworks that have Apple Silicon support, and how to enable support for both model training and model export.

Bring your iOS or iPadOS game to visionOS

This session focuses on how to add visionOS specific features to an existing game, such as dynamic backgrounds, supporting parallax rendering, and head tracking.

What’s new in device management

Device management is now supported on visionOS, and there is a long list of new convenience features. The most important one is likely that managed devices can now have activation lock disabled by the MDM software.

Discover Swift enhancements in the Vision framework

The Vision framework now supports async/await, which will make integrating with Vision much cleaner than the old completion handler based API. There are also some new types of requests; the most interesting one to me is assessing image aesthetics.
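
A small sketch of the new async API using the aesthetics request; the request and property names follow the session as I noted them, so treat them as assumptions:

```swift
import Vision

// Score an image's aesthetics with the new Swift-native Vision API.
func aestheticsScore(for imageURL: URL) async throws -> Float {
    // Requests are now Swift structs performed directly with async/await,
    // replacing VNImageRequestHandler and its completion handlers.
    let request = CalculateImageAestheticsScoresRequest()
    let observation = try await request.perform(on: imageURL)
    return observation.overallScore
}
```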

Implement App Store Offers

This session covers how to implement win-back offers for subscriptions in detail, including configuring eligibility conditions, how to choose the best offer for a customer in app, handing off purchases of these offers from the App Store, and how to test them in Xcode.
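
A hedged sketch of the in-app purchase path; the winBackOffers property and the .winBackOffer purchase option are the names I came away with, so verify them against current StoreKit before relying on this:

```swift
import StoreKit

// Purchase a subscription, applying a win-back offer when one is available.
func purchase(_ product: Product) async throws -> Product.PurchaseResult {
    // Offers the customer is eligible for; picking the first is a
    // placeholder for real "choose the best offer" logic.
    if let offer = product.subscription?.winBackOffers.first {
        return try await product.purchase(options: [.winBackOffer(offer)])
    }
    return try await product.purchase()
}
```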

Accelerate machine learning with Metal

This walks through some of the new optimizations available in Metal Performance Shaders, and the debugging tools. If I hadn’t spent a while doing ML work this year, I don’t think any of this would have made any sense to me.

Customize spatial Persona templates in SharePlay

This session walks through the things to consider when working with a spatial Persona template. It covers how to set up seating, how to reserve seats for particular roles, considerations for transitions, and affordances to be made for users on other platforms or who aren’t using Spatial Personas.
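
A sketch of a custom template, assuming the SpatialTemplate API shown in the session: each element is a seat placed relative to the shared app window, and role reservations and transitions would layer on top of this.

```swift
import GroupActivities

// Three seats in a row, four meters back from the shared app window.
struct TheaterTemplate: SpatialTemplate {
    var elements: [any SpatialTemplateElement] {
        [
            .seat(position: .app.offsetBy(x: -1, z: 4)),
            .seat(position: .app.offsetBy(x: 0, z: 4)),
            .seat(position: .app.offsetBy(x: 1, z: 4)),
        ]
    }
}

// Opt a SharePlay session into the custom template.
func configureSpatialTemplate<A: GroupActivity>(for session: GroupSession<A>) async {
    guard let coordinator = await session.systemCoordinator else { return }
    var config = SystemCoordinator.Configuration()
    config.spatialTemplatePreference = .custom(TheaterTemplate())
    coordinator.configuration = config
}
```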

Modeling Progress

I primed, painted, and weathered the first few sprues in the kit, and did some experimenting to get a glow effect on the eyes rather than using the clear plastic part as-is. I think it looks good when viewed from a distance.

I then got a second set of sprues primed and have the base coat in place on those, so I just have weathering left there.

[Photos: Acerby, weathered · Acerby, face small · Acerby, face large · Acerby, based]