I spent several hours today (that I will never get back) attempting to force a view with a 3D transform to render properly into a UIImage as a favor for Aaron B. Long story short? Failed.
My most promising approach was this one, where I attempted to read the image in pixel by pixel. Although it worked for the gross dimensions, the fine details were not read properly because of the z-axis rotation:
// Map each reference point from the superlayer's coordinate space into the
// transformed layer's space before sampling.
CGPoint p = [view.layer.superlayer convertPoint:refPoint toLayer:view.layer];
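Fleshed out, the attempt looked roughly like this (a reconstructed sketch rather than the exact code; the RGBA bitmap plumbing and the nearest-neighbour sampling are simplifications, and that sampling is likely part of why fine detail suffers):

#import <UIKit/UIKit.h>
#include <stdlib.h>
#include <string.h>

// Render the view's layer flat, then remap every output pixel through
// convertPoint:toLayer:. RGBA8 at scale 1, nearest-neighbour sampling.
static UIImage *ImageByRemappingLayer(UIView *view, CGSize outputSize) {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 1.0f);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *flat = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    size_t srcW = (size_t)view.bounds.size.width;
    size_t srcH = (size_t)view.bounds.size.height;
    size_t dstW = (size_t)outputSize.width;
    size_t dstH = (size_t)outputSize.height;
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    uint8_t *src = calloc(srcW * srcH * 4, 1);
    uint8_t *dst = calloc(dstW * dstH * 4, 1);
    CGContextRef srcCtx = CGBitmapContextCreate(src, srcW, srcH, 8, srcW * 4,
                                                space, kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(srcCtx, CGRectMake(0, 0, srcW, srcH), flat.CGImage);

    for (size_t y = 0; y < dstH; y++) {
        for (size_t x = 0; x < dstW; x++) {
            // Map the output point back into the transformed layer's space.
            CGPoint p = [view.layer.superlayer convertPoint:CGPointMake(x, y)
                                                    toLayer:view.layer];
            if (p.x >= 0 && p.x < srcW && p.y >= 0 && p.y < srcH) {
                size_t si = ((size_t)p.y * srcW + (size_t)p.x) * 4;
                memcpy(&dst[(y * dstW + x) * 4], &src[si], 4);
            }
        }
    }

    CGContextRef dstCtx = CGBitmapContextCreate(dst, dstW, dstH, 8, dstW * 4,
                                                space, kCGImageAlphaPremultipliedLast);
    CGImageRef cgOut = CGBitmapContextCreateImage(dstCtx);
    UIImage *result = [UIImage imageWithCGImage:cgOut];
    CGImageRelease(cgOut);
    CGContextRelease(srcCtx);
    CGContextRelease(dstCtx);
    CGColorSpaceRelease(space);
    free(src);
    free(dst);
    return result;
}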
In the end, the lesson is this:
1. Don’t get distracted by Quartz stuff when you’re supposed to be writing a Quartz book.
2. Listen to Apple when it says, “Layers that use 3D transforms are not rendered.”
On the bright side, I have marching ants working beautifully — integrated with CADisplayLink and a time interval you specify: Marching Ants
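I won’t paste the whole thing here, but the display-link half boils down to something like this (a minimal sketch of the usual CAShapeLayer dash-phase trick, not the shipped code):

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

// Marching ants as a dashed CAShapeLayer whose lineDashPhase is advanced by a
// CADisplayLink, throttled to a caller-specified interval.
@interface MarchingAntsView : UIView
@property (nonatomic) NSTimeInterval stepInterval;
- (void)stopAnimating; // CADisplayLink retains its target; call this to break the cycle
@end

@implementation MarchingAntsView {
    CAShapeLayer *_ants;
    CADisplayLink *_link;
    CFTimeInterval _lastStep;
}

- (instancetype)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        _stepInterval = 0.1; // advance the dashes every 100 ms by default
        _ants = [CAShapeLayer layer];
        _ants.fillColor = NULL;
        _ants.strokeColor = [UIColor blackColor].CGColor;
        _ants.lineWidth = 1.0f;
        _ants.lineDashPattern = @[@4, @4];
        [self.layer addSublayer:_ants];
        _link = [CADisplayLink displayLinkWithTarget:self selector:@selector(tick:)];
        [_link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
    }
    return self;
}

- (void)layoutSubviews {
    [super layoutSubviews];
    _ants.frame = self.bounds;
    _ants.path = [UIBezierPath bezierPathWithRect:CGRectInset(self.bounds, 0.5f, 0.5f)].CGPath;
}

- (void)tick:(CADisplayLink *)link {
    if (link.timestamp - _lastStep < _stepInterval) return; // honor the interval
    _lastStep = link.timestamp;
    _ants.lineDashPhase += 1.0f; // shifting the dash phase makes the ants march
}

- (void)stopAnimating {
    [_link invalidate];
    _link = nil;
}
@end

Throttling off the link’s timestamp, rather than pinning frameInterval, is what lets the step interval be anything you specify instead of a multiple of the display refresh.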
3 Comments
UIGetScreenImage() was the only way I ever found to get an image of 3D-rendered text. I spent whole days trying to find a way when I was working on Crawl Creator. (Originally it was going to output .gifs instead of full-on .mp4 with audio.)
’Cause what’s a Star Wars crawl without it being in 3D?
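For reference, UIGetScreenImage() is private, so you have to declare it yourself; something like this (and it’s grounds for App Store rejection, so ship with care):

#import <UIKit/UIKit.h>

// Private API: no public header declares this, so it has to be declared by hand.
CGImageRef UIGetScreenImage(void);

static UIImage *ScreenSnapshot(void) {
    CGImageRef screen = UIGetScreenImage(); // the composited screen, 3D text and all
    UIImage *image = [UIImage imageWithCGImage:screen];
    CGImageRelease(screen);
    return image;
}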
An idea just occurred to me, though. What about doing it this way (rough sketch after the list):
• Iterate through subviews
• Query each subview for its transform
• Render each subview in context to produce a flat image
• Using either GPUImage or Core Image, apply the queried transform to the rendered image.
• Now hide said subview.
• Continue on your merry way, tracking each subview image and its position in its superview.
• Finally, combine all of these rendered and transformed UIImages into one final UIImage.
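In code, the plan comes out to something like this (untested; ApplyTransform3DToImage is the GPUImage/Core Image step, sketched in my next comment):

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

// The transform step from the list above; a GPUImage version is sketched in a
// follow-up comment below.
UIImage *ApplyTransform3DToImage(UIImage *image, CATransform3D transform);

static UIImage *SnapshotWithTransforms(UIView *container) {
    UIGraphicsBeginImageContextWithOptions(container.bounds.size, NO, 0.0f);
    for (UIView *subview in container.subviews) {
        // Query the transform, then render the subview flat; renderInContext:
        // draws the content fine once the 3D transform is out of the picture.
        CATransform3D transform = subview.layer.transform;
        UIGraphicsBeginImageContextWithOptions(subview.bounds.size, NO, 0.0f);
        [subview.layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage *flat = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        // Warp off-layer, then composite back at the subview's position in its
        // superview. Naive: a real version would draw into the warped image's
        // own extent instead of squeezing it back into the original frame.
        UIImage *warped = ApplyTransform3DToImage(flat, transform);
        [warped drawInRect:subview.frame];
        // Hiding each subview (the step above) only matters if you also render
        // the container's own layer for its background content.
    }
    UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return combined;
}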
I’m going to try this out a bit later tonight, I think.
Actually, GPUImage awesomely supports transforming a UIImage with a CATransform3D matrix, whereas Core Image wants CIVectors and the like, so I’m going to try to add this to the GPUImage project. I’ll keep you updated.
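Here’s roughly what that helper looks like with GPUImage’s GPUImageTransformFilter (API names as I read them from the headers; the capture calls may be named differently depending on your GPUImage version):

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>
#import "GPUImage.h"

// The ApplyTransform3DToImage helper from my earlier sketch, done with
// GPUImageTransformFilter.
UIImage *ApplyTransform3DToImage(UIImage *image, CATransform3D transform) {
    GPUImagePicture *picture = [[GPUImagePicture alloc] initWithImage:image];
    GPUImageTransformFilter *filter = [[GPUImageTransformFilter alloc] init];
    filter.transform3D = transform; // takes the CATransform3D straight off the layer
    [picture addTarget:filter];
    [filter useNextFrameForImageCapture];
    [picture processImage];
    return [filter imageFromCurrentFramebuffer];
}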
I’m reallllllly close. But for some reason the CIPerspectiveTransform filter’s extent is causing a crop when converting to a UIImage.
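In case anyone hits the same thing, what I’m trying now is rendering from the filter output’s own extent rather than the input’s rect, since the warp pushes corners outside the original bounds (the corner vectors below are placeholder values):

#import <UIKit/UIKit.h>
#import <CoreImage/CoreImage.h>

// The corner vectors here are placeholders; the real ones would come from
// projecting the layer's transform.
static UIImage *PerspectiveWarp(UIImage *source) {
    CIImage *input = [CIImage imageWithCGImage:source.CGImage];
    CIFilter *perspective = [CIFilter filterWithName:@"CIPerspectiveTransform"];
    [perspective setValue:input forKey:kCIInputImageKey];
    [perspective setValue:[CIVector vectorWithX:118 Y:484] forKey:@"inputTopLeft"];
    [perspective setValue:[CIVector vectorWithX:646 Y:507] forKey:@"inputTopRight"];
    [perspective setValue:[CIVector vectorWithX:79 Y:92] forKey:@"inputBottomLeft"];
    [perspective setValue:[CIVector vectorWithX:580 Y:0] forKey:@"inputBottomRight"];

    CIImage *output = perspective.outputImage;
    CIContext *context = [CIContext contextWithOptions:nil];
    // The key line: render the *output's* extent. Using the input's rect here
    // crops the warped corners, which looks exactly like the bug above.
    CGImageRef cgImage = [context createCGImage:output fromRect:output.extent];
    UIImage *result = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return result;
}

There’s also CIPerspectiveTransformWithExtent if you want to pin the output extent yourself instead.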