WWDC24 Week 5
I just checked, and I have exactly 21 videos left, so if I stick to three videos a day I'll finish up next Friday. Since I have some vacation planned at the tail end of that week, I won't actually be doing three videos a day, so I'll see how ambitious I am: either I'll double up on videos for a few days, or take another week to finish this out.
Enhance ad experiences with HLS interstitials
Interstitials now support tighter integration with the primary timeline, and can be shared across a SharePlay session, so things like recaps, previews, and other ad content can cleanly be presented as interstitials in a stream.
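As a rough sketch of how interstitials are scheduled with AVFoundation: you attach an `AVPlayerInterstitialEvent` to the primary item at a given time, then hand the events to an `AVPlayerInterstitialEventController`. The URLs below are placeholders and error handling is omitted.

```swift
import AVFoundation

// Placeholder streams; substitute real HLS playlists.
let primaryItem = AVPlayerItem(url: URL(string: "https://example.com/primary.m3u8")!)
let player = AVPlayer(playerItem: primaryItem)

let adItem = AVPlayerItem(url: URL(string: "https://example.com/ad.m3u8")!)

// Schedule an interstitial 10 seconds into the primary timeline.
let event = AVPlayerInterstitialEvent(
    primaryItem: primaryItem,
    time: CMTime(seconds: 10, preferredTimescale: 1)
)
event.templateItems = [adItem]

let controller = AVPlayerInterstitialEventController(primaryPlayer: player)
controller.events = [event]
player.play()
```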
Meet the next generation of CarPlay architecture
The new CarPlay system will allow phone-rendered elements, car-rendered elements, and passthrough elements like video feeds from the car to all coexist, combined by a compositor that keeps animations and the like in sync. It looks promising, and it will be very interesting to see what happens when cars with this tech ship.
Use HDR for dynamic image experiences in your app
This session goes into detail on how HDR images are structured, and the implications of those formats when working with the files in an image editor. HDR content is often saved as an SDR image plus extra data that describes the brightness of pixels outside of the standard SDR range.
Enhance the immersion of media viewing in custom environments
This session covers how to set up mounting points in a custom environment, and how to set up reflections, environment probes, and tints on the passthrough video for a custom environment.
Explore game input in visionOS
This session covers both gesture-based and controller-based game input on visionOS. Controller support is exactly the same as on the other operating systems, and the gesture section covers both how to use system gestures and how to work with hand tracking.
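For the hand-tracking side, a minimal sketch of reading joint data on visionOS might look like the following, assuming a running `ARKitSession`; the fingertip check is a placeholder for whatever custom gesture logic an app needs.

```swift
import ARKit

// Hedged sketch: consuming hand-tracking updates for custom gestures.
func watchHands() async throws {
    let session = ARKitSession()
    let hands = HandTrackingProvider()
    try await session.run([hands])

    for await update in hands.anchorUpdates {
        let anchor = update.anchor
        guard anchor.isTracked,
              let tip = anchor.handSkeleton?.joint(.indexFingerTip)
        else { continue }
        // The fingertip pose (relative to the hand anchor) can drive a
        // custom gesture recognizer.
        _ = tip.anchorFromJointTransform
    }
}
```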
What’s new in privacy
This session is a high-level overview of the changes in privacy; most of the content has dedicated sessions explaining the details of how to work with those changes. Getting an overview of the direction behind all of the privacy sessions was particularly informative, and it’s nice seeing that the focus is on both privacy and user friendliness.
What’s new in DockKit
DockKit adds improved tracking, support for buttons on devices, battery information, and the ability for an app to interact with the stream of focus interest data from the device. Most of this was done to add support for gimbal mounts, and to allow DockKit to be used for photography and panorama capture.
Keep colors consistent across captures
Having spent time doing product photography and attempting to get the color grading correct from capture to display, it’s absolutely fascinating hearing about how this was implemented. The system captures an image with and without flash, and uses how the known light source from the flash changes the brightness of pixels to correct the color on images.
What’s new in USD and MaterialX
This is a session on new features in the content creation ecosystem that can be used across Apple platforms. Of particular interest are some of the new tools being added for things like reducing the file size of USDZ files, better Quick Look support, and some Apple shaders that can be used in MaterialX workflows.
Port advanced games to Apple platforms
This is an overview of all of the tools that Apple has made available for porting games to Apple platforms. Given that Apple was ambivalent at best about gaming for a long time, it’s nice to see they’ve both built up a toolchain for porting games to their platforms, and that they continue to add features showing the project is important.
Build compelling spatial photo and video experiences
This session walks through how to work with both the display and capture of spatial video and photos. It covers a range from simple things like how to display spatial video in a video player, to the math involved in using a dual camera setup for spatial video capture and how to make sure the image planes are properly aligned for a good 3D viewing experience.
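On the simple end of that range, displaying spatial video can be as plain as handing the asset to AVKit; on visionOS the player handles spatial presentation when the asset carries spatial metadata. A minimal sketch, with the URL as a placeholder:

```swift
import AVKit
import SwiftUI

// Hedged sketch: wrapping AVPlayerViewController for SwiftUI playback.
struct SpatialPlayerView: UIViewControllerRepresentable {
    let url: URL

    func makeUIViewController(context: Context) -> AVPlayerViewController {
        let controller = AVPlayerViewController()
        controller.player = AVPlayer(url: url)
        controller.player?.play()
        return controller
    }

    func updateUIViewController(_ vc: AVPlayerViewController, context: Context) {}
}
```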
Discover RealityKit APIs for iOS, macOS, and visionOS
This session walks through building a simple game, and handling things like physics, portals, and controls in a visionOS project, and then walks through how those features can all be enabled on other platforms.
What’s new in App Store Connect
The biggest change here is the ability to send notes on upcoming feature releases to App Store editorial, who can then collaborate with you to feature them at launch. There’s also the ability to deep link into an app after install via a custom product page, improvements to tester onboarding in TestFlight, and the ability to create marketing images.
Create enhanced spatial computing experiences with ARKit
This session covers some of the new features coming to ARKit; the headline items are room tracking, tracking of angled planes, and improved hand tracking options.
Explore object tracking for visionOS
This session walks through the process of setting up object tracking in visionOS, which starts with training an ML model to recognize a known object from a reference model; you can then use that to find and track the object. The reference model created for training is also useful for creating an occlusion object so that content rendered by the OS can be blocked by the tracked object.
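A rough sketch of the tracking side of that flow: load the trained reference object, hand it to an `ObjectTrackingProvider`, and consume anchor updates. The file name is a placeholder for a reference object produced by Create ML.

```swift
import ARKit

// Hedged sketch of object tracking on visionOS.
func trackObject() async throws {
    // "robot.referenceobject" is a placeholder asset name.
    let reference = try await ReferenceObject(
        from: Bundle.main.url(forResource: "robot", withExtension: "referenceobject")!
    )
    let provider = ObjectTrackingProvider(referenceObjects: [reference])
    let session = ARKitSession()
    try await session.run([provider])

    for await update in provider.anchorUpdates {
        // The anchor's pose can position attached content or occlusion geometry.
        _ = update.anchor.originFromAnchorTransform
    }
}
```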
Explore machine learning on Apple platforms
This is a high-level overview of all of the new machine learning features that came out this year. It was interesting to see the focus on research and open source contributions alongside the regular SDK work.
Meet TabletopKit for visionOS
TabletopKit is a new API for creating either single player or multiplayer board gaming experiences on visionOS. It allows you to create a table, game objects, and a game board, and easily set up multiplayer games using SharePlay. It will be really interesting to see what developers do with it.
What’s new in Create ML
This session is an overview of new features in Create ML this year. The headline feature is object tracking, but there are also some nice improvements to the tools for visualizing your data sets, to be able to make sure the data is good before using it to train an ML model.
Compose interactive 3D content in Reality Composer Pro
Reality Composer Pro keeps adding new features; this session covers how to work with animations, inverse kinematics, skeletal poses, and blend shapes. With these updates, it looks like the built-in tooling for the system supports a lot of what used to require highly specialized tools.
Enhance your spatial computing app with RealityKit audio
This session covers how to work with audio in RealityKit. It walks through adding sound effects to an object and controlling directionality and volume on those, how to set up sounds for collisions both in an environment, and with real world objects, and how to add music and ambient sounds to an environment.
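The sound-effect piece of that can be sketched in a few lines: attach a `SpatialAudioComponent` to control directionality and level, then play a loaded audio resource on the entity. The asset name is a placeholder for a sound file in the app bundle.

```swift
import RealityKit

// Hedged sketch of directional sound effects on a RealityKit entity.
let entity = Entity()

// Quieter, beam-shaped playback: sound projects mostly forward.
entity.components.set(
    SpatialAudioComponent(gain: -6, directivity: .beam(focus: 0.5))
)

// "chime" is a placeholder asset name.
if let resource = try? AudioFileResource.load(named: "chime") {
    let playback = entity.playAudio(resource)
    // Call playback.stop() when the effect should end.
}
```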
Get started with HealthKit in visionOS
This session covers how to work with HealthKit in visionOS. The current APIs are available with feature parity to other platforms, so the only change is special handling for guest mode: data cannot be saved and permissions cannot be changed while a guest is using the device.
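Because the API surface matches iOS, the usual authorization flow carries over unchanged; a minimal sketch, using heart rate as an example type:

```swift
import HealthKit

// Hedged sketch: same HealthKit authorization flow as on iOS.
// In guest mode the system disallows saves and permission changes.
let store = HKHealthStore()

func requestHeartRateAccess() async throws {
    guard HKHealthStore.isHealthDataAvailable() else { return }
    let readTypes: Set = [HKQuantityType(.heartRate)]
    try await store.requestAuthorization(toShare: [], read: readTypes)
}
```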
Modeling Progress
At the end of fall last year, I had to stop work on a kit that was about 75% done because it got too cold to open the window to vent outside while painting with lacquers. So I dug that out and finished the paint work and did a quick pass of panel liners on it. I really can't say enough good things about the lacquers in terms of how thin they go on, and how durable they are. The shine on the high gloss finish is just great in person. At some point in the near future, I'll probably put together some sort of simple base for it to be able to pose it more dynamically.
After that I started in on one of the kits I picked up in Japan. I managed to finish cutting out and sanding all of the parts from the first few sprues, and will be priming those next week.