If one gesture recognizer is fun, then two must make a party. This time, you’re going to add a pinch/zoom gesture that resizes your shape view. As before, start by creating and attaching a second gesture recognizer object at the end of the -addShape: method (SYViewController.m):
UIPinchGestureRecognizer *pinchGesture;
pinchGesture = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(resizeShape:)];
[shapeView addGestureRecognizer:pinchGesture];
The pinch gesture recognizer object doesn’t need any configuration, because a pinch/zoom is always a two-finger gesture. At the top of the file, add a prototype for the new action method in the private @interface SYViewController () section:
- (IBAction)resizeShape:(UIPinchGestureRecognizer*)gesture;
Finally, add the method to the @implementation section:
- (IBAction)resizeShape:(UIPinchGestureRecognizer*)gesture
{
    SYShapeView *shapeView = (SYShapeView*)gesture.view;
    CGFloat pinchScale = gesture.scale;
    CGAffineTransform zoom;
    switch (gesture.state) {
        case UIGestureRecognizerStateBegan:
        case UIGestureRecognizerStateChanged:
            zoom = CGAffineTransformMakeScale(pinchScale,pinchScale);
            shapeView.transform = zoom;
            break;
        case UIGestureRecognizerStateEnded:
            shapeView.transform = CGAffineTransformIdentity;
            CGRect frame = shapeView.frame;
            CGFloat xDelta = frame.size.width*pinchScale-frame.size.width;
            CGFloat yDelta = frame.size.height*pinchScale-frame.size.height;
            frame.size.width += xDelta;
            frame.size.height += yDelta;
            frame.origin.x -= xDelta/2;
            frame.origin.y -= yDelta/2;
            shapeView.frame = frame;
            [shapeView setNeedsDisplay];
            break;
        default:
            shapeView.transform = CGAffineTransformIdentity;
            break;
}
}
This method follows the same pattern as -moveShape:. The only significant difference is in the code to adjust the view’s final size and position, which requires a little more math than the drag method.
Run the project and try it out. Create a shape and then use two fingers to resize it, as shown on the left in Figure 11-11.
Figure 11-11. Resizing using a transform
You’ll notice that when you zoom the shape out a lot, its image gets the “jaggies”: aliasing artifacts caused by magnifying the smaller image. The reason is that you’re not resizing the view during the pinch gesture; you’re just applying a transform to the original view’s image. Bézier paths are resolution independent and draw smoothly at any size, but a transform has only the pixels of the view’s current image to work with. At the end of the pinch gesture, the shape view’s size is adjusted and it is redrawn. This creates a new Bézier path, at the new size, and all is smooth again, as shown on the right in Figure 11-11.
Your app is looking pretty lively, but I think it could stand to be jazzed up a bit. What do you think about adding some animation?
Animation: It’s Not Just for Manga
Animation has become an integral, and expected, feature of modern apps. Without it, your app looks dull and uninteresting, even if it’s doing everything you intended it to. Fortunately for you, the designers of iOS know this, and they’ve done a staggering amount of work so you can easily add animation to your app. There are four ways to add movement to your app:
• The built-in stuff
• DIY
• Core Animation
• OpenGL
The “built-in stuff” refers to those places in the iOS API where animation is done for you. Countless methods, from view controllers to table views, include a Boolean animated parameter. If you want your view controller to slide over, your page to peel up, your toolbar buttons to resize smoothly, your table view rows to leap sprightly to their new positions, or your progress indicator to drift gently to its new value, all you have to do is pass YES for the animated parameter and the iOS classes will do all of the work. So keep an eye out for those animated parameters, and use them.
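Spotting one of these is easy. The objects and values in this sketch are hypothetical, but each of these real UIKit methods animates the change just because you passed YES:

```objc
[mySwitch setOn:YES animated:YES];                    // knob slides over
[myProgressView setProgress:0.75 animated:YES];       // bar drifts to 75%
[self.navigationController pushViewController:nextController
                                     animated:YES];   // new view slides in
```

Pass NO instead and each change happens instantly, with no other code difference.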
In the do-it-yourself (DIY) animation solution, your code performs the frame-by-frame changes needed to animate your interface. This usually involves steps like this:
1. Create a timer that fires 30 times/second.
2. When the timer fires, update the position/look/size/content of a view.
3. Mark the view as needing to be redrawn.
4. Repeat steps 2 and 3 until the animation ends.
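A minimal sketch of those four steps, using an NSTimer (the timer property, view, and end condition here are illustrative stand-ins, not code from Shapely):

```objc
// 1. Create a timer that fires 30 times a second.
- (void)startAnimation
{
    self.animationTimer =
        [NSTimer scheduledTimerWithTimeInterval:1.0/30.0
                                         target:self
                                       selector:@selector(animationTick:)
                                       userInfo:nil
                                        repeats:YES];
}

// 2 & 3. When it fires, nudge the view; changing center marks it for redraw.
- (void)animationTick:(NSTimer*)timer
{
    CGPoint center = self.movingView.center;
    center.x += 2.0;
    self.movingView.center = center;
    // 4. Repeat until the animation ends.
    if (center.x >= 300.0)
        [timer invalidate];
}
```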
The DIY solution is, ironically, the method most often abused by amateurs. It might work OK in a handful of situations, but more often it suffers from a number of unavoidable performance pitfalls. The biggest problem is timing. It’s really difficult to balance the speed of an animation so it looks smooth but doesn’t run so fast that it wastes CPU resources and battery life, and drags the rest of your app and the iOS system down with it.
Using Core Animation
Smart iOS developers—that’s you, since you’re reading this book—use Core Animation. Core Animation has solved all of the thorny performance, load-balancing, background-threading, and efficiency problems for you. All you have to do is tell it what you want animated and let it work its magic.
Animated content is drawn in a layer (CALayer) object. A layer object is just like a UIView: it’s a canvas that you draw into using Core Graphics. Once drawn, the layer can be animated using Core Animation. In a nutshell, you tell Core Animation how you want the layer changed (moved, shrunk, spun, curled, flipped, and so on), over what time period, and how fast. You then forget about it and let Core Animation do all of the work. Core Animation doesn’t even bother your app’s event loop; it works quietly in the background, balancing the animation work with available CPU resources so it doesn’t interfere with whatever else your app needs to do. It’s really a remarkable system.
Keep in mind that Core Animation doesn’t change the contents of the layer object. It temporarily animates a copy of the layer, which disappears when the animation is over. I like to think of Core Animation as “live” transforms; it temporarily projects a distorted, animated version of your layer, but never changes the layer.
Oh, did I say “a layer object is just like a UIView”? I should have said “a layer object, like the one in UIView,” because UIView is based on Core Animation layers. When you’re drawing your view in -drawRect:, you’re drawing into a CALayer object. You can get your UIView’s layer object through the layer property, should you ever need to work with it directly. The take-away lesson is this: all UIView objects can be animated using Core Animation. Now you’re cooking with gas!
Adding Animation to Shapely
There are three ways to get Core Animation working for you. I already described the first: all of those “built-in” animated parameters are based on Core Animation—no surprise. The second, traditional, Core Animation technique is to create an animation (CAAnimation) object. An animation object controls an animation sequence. It determines when the sequence starts and stops, the speed of the animation (called the animation curve), what the animation does, whether it repeats, how many times, and so on. There are subclasses of CAAnimation that will animate a particular property of a view or animate a transition (the adding, removal, or exchange of view objects). There’s even an animation class (CAAnimationGroup) that synchronizes multiple animation objects.
Honestly, creating CAAnimation objects isn’t easy. Because it can be so convoluted, there are a ton of convenience constructors and methods that try to make it as painless as possible—but it’s still a hard row to hoe. You have to define the beginning and ending property values of what’s being animated. You have to define timing and animation curves, and then you have to start the animation and change the actual property values at the appropriate time. Remember that an animation doesn’t change the original view, so if you want a view to slide from left to right, you have to create an animation that starts on the left and ends on the right, and then you have to set the position of the original view to the right, or the view will reappear on the left when the animation is over. It’s tedious.
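Just how tedious? Here’s a rough sketch of that slide-from-left-to-right example using a traditional CABasicAnimation (the view variable and coordinates are made up for illustration):

```objc
CABasicAnimation *slide = [CABasicAnimation animationWithKeyPath:@"position.x"];
slide.fromValue = @(50.0);                 // starting x position
slide.toValue = @(270.0);                  // ending x position
slide.duration = 0.5;
slide.timingFunction =
    [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseOut];
[someView.layer addAnimation:slide forKey:@"slide"];
// The animation only draws a moving copy; you must also move the real view,
// or it will snap back to the left when the animation is over.
someView.center = CGPointMake(270.0, someView.center.y);
```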
Fortunately, the iOS gods have felt your pain and created a really simple way of creating basic animations, called the block-based animation methods. These UIView methods let you write a few lines of code to tell Core Animation how you want the properties of your view changed. Core Animation then handles the work of creating, configuring, and starting the CAAnimation object(s). It even updates your view’s properties so that, when the animation is over, your properties will be at the end value of the animation—which is exactly what you want.
So how simple are these block-based animation methods to use? You be the judge. Find your -addShape: method in the SYViewController.m file. At the end of the method, add this code:
shapeFrame = shapeView.frame;
CGRect buttonFrame = ((UIView*)sender).frame;
shapeView.frame = buttonFrame;
[UIView animateWithDuration:0.5
                      delay:0
                    options:UIViewAnimationOptionCurveEaseOut
                 animations:^{ shapeView.frame = shapeFrame; }
                 completion:nil];
The new code starts by getting the updated frame of the new shape view. Remember that its frame was adjusted when its center property was set to a random position on the screen. This is the location you want the view to end up at.
The second line of code gets the frame of the button that’s creating the new shape, and the third line repositions your new shape view (again) so it’s right on top of, and the same size as, the button. If you stopped here, each new shape view would simply appear on top of the button you tapped, covering it.
The last statement is the magic. It starts an animation that lasts half a second (duration:0.5), starts immediately (delay:0), and uses an “ease out” animation curve (options:UIViewAnimationOptionCurveEaseOut). There are four canned curves to choose from: ease out (think of a plane landing), ease in (a plane taking off), ease in-out (takeoff and landing), and linear (a plane in flight at constant speed).
The method has two code block parameters. The first is the block that describes what you want animated, and by “describe” I mean you just write the code to set the properties that you want to change smoothly. UIView will automatically animate any of these seven properties:
• frame
• bounds
• center
• transform
• alpha
• backgroundColor
• contentStretch
If you want a view to move or change size, animate its center or frame. Want it to fade away? Animate its alpha property from 1.0 to 0.0. Want it to smoothly turn to the right? Animate its transform from the identity transform to a rotated transform. You can do any of these, or even several at once (changing the alpha and center at the same time). It’s that easy.
The completion parameter is another code block, executed when the animation ends. In Shapely, there’s nothing else to do, since your only goal was to move the view from buttonFrame to shapeFrame. If there were, you’d just pass a code block that performs any post-animation chores. You can even start another animation!
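For example, if you had first set shapeView.alpha to 0.5, you could chain a fade-in after the flight. This variation is my own, not part of Shapely:

```objc
[UIView animateWithDuration:0.5
                      delay:0
                    options:UIViewAnimationOptionCurveEaseOut
                 animations:^{ shapeView.frame = shapeFrame; }
                 completion:^(BOOL finished) {
                     // runs only after the first animation finishes
                     [UIView animateWithDuration:0.3
                                      animations:^{ shapeView.alpha = 1.0; }];
                 }];
```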
Run your app again and create a few shapes. Pretty cool, huh? (Again, no figure.) As you tap each add-shape button, the new shape flies into your view, right from underneath your finger, like some crazy arcade game. If you’re fast, you can get several going at the same time. And all it cost you was four lines of code.
What if you want to animate something other than these seven properties, or create animations that run in a loop, move in an arc, or run backwards? For that, you’ll need to dig into Core Animation and create your own animation objects; I’ll show you how in Chapter 14. You can also read about it in the Core Animation Programming Guide, which you’ll find in Xcode’s Documentation and API Reference.
OpenGL
Oops, I almost forgot about the last animation technology: OpenGL. OpenGL is short for Open Graphics Library. It’s a cross-language, multi-platform API for 2D and 3D graphics. The flavor of OpenGL included in iOS is OpenGL ES (OpenGL for Embedded Systems), a trimmed-down version of OpenGL suitable for running on very small computer systems, like iOS devices.
To be blunt, OpenGL is another world. An OpenGL view is programmed using a special C-like language called GLSL (the OpenGL Shading Language). To use it, you write vertex and fragment shader programs. These tiny little programs run in your device’s GPU (graphics processing unit), as opposed to the kind of code you’ve been writing, which runs in the CPU (central processing unit). A GPU is a massively parallel processor that might be running a hundred copies of your shader program simultaneously, each one calculating the value of a different pixel.
The results can be nothing less than stunning. If you’ve ever run a 3D flight simulator, shoot-’em-up, or adventure game, you were probably looking at an OpenGL view. Even 2D games with swirling clouds, stars, or any number of special effects are written using OpenGL.
If you want to tap the full power of your device’s graphics processing unit, OpenGL is the way to go—but you’ve got a lot to learn. You’ll need a good book on OpenGL; yes, there are whole books, thicker than this one, just on OpenGL. Your content appears in a special Core Animation layer (CAEAGLLayer) object, specifically designed to display an OpenGL context. To add this to your app, create a GLKView (OpenGL Kit view) object in your interface. GLKView is a subclass of UIView that hosts a CAEAGLLayer object. If you need one, there’s also a handy GLKViewController class.
Needless to say, I won’t be showing you any OpenGL examples in this book. (There’s an OpenGL Game Xcode project template if you’re dying to take a peek.) If that’s the kind of power you want to harness for your app, at least you know what direction to go in. Start with the OpenGL ES Programming Guide for iOS that you’ll find in Xcode’s Documentation and API Reference. But be warned: you’ll need to learn a lot of OpenGL fundamentals before much of that document makes any sense.
The Order of Things
While you still have the Shapely project open, I want you to play around with view object order a little bit. Subviews have a specific order, called their Z-order, which determines how overlapping views are drawn. It’s not rocket science: the back view draws first, and subsequent views draw on top of it (if they overlap). If the overlapping view is opaque, it obscures the view(s) behind it. If portions of it are transparent, the views behind it “peek” through the holes.
This is easier to see than explain, so add two more gesture recognizers to Shapely. Once again, go back to the -addShape: action method in SYViewController.m. Immediately after the code that attaches the other two gesture recognizers (before the animation code you just added), insert this:
UITapGestureRecognizer *dblTapGesture;
dblTapGesture = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(changeColor:)];
dblTapGesture.numberOfTapsRequired = 2;
[shapeView addGestureRecognizer:dblTapGesture];

UITapGestureRecognizer *trplTapGesture;
trplTapGesture = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(sendShapeToBack:)];
trplTapGesture.numberOfTapsRequired = 3;
[shapeView addGestureRecognizer:trplTapGesture];
This code adds double-tap and triple-tap gesture recognizers, which send a -changeColor: and
-sendShapeToBack: message, respectively. Scroll up to the @interface SYViewController () private interface section and declare the new methods:
- (IBAction)changeColor:(UITapGestureRecognizer*)gesture;
- (IBAction)sendShapeToBack:(UITapGestureRecognizer*)gesture;
Now add the two new methods to the @implementation section:
- (IBAction)changeColor:(UITapGestureRecognizer*)gesture
{
    SYShapeView *shapeView = (SYShapeView*)gesture.view;
    UIColor *color = shapeView.color;
    NSUInteger colorIndex = [self.colors indexOfObject:color];
    NSUInteger newIndex;
    do {
        newIndex = arc4random_uniform((uint32_t)self.colors.count);
    } while (colorIndex==newIndex);
    shapeView.color = [self.colors objectAtIndex:newIndex];
}
- (IBAction)sendShapeToBack:(UITapGestureRecognizer*)gesture
{
    UIView *shapeView = gesture.view;
    [self.view sendSubviewToBack:shapeView];
}
The -changeColor: method is mostly for fun. It determines which color the shape is and picks a new color for it at random.
The -sendShapeToBack: action illustrates how views overlap. When you add a subview to a view (using UIView’s -addSubview: method), the new view goes on top. But that’s not your only choice. If view order is important, there are a number of methods that will insert a subview at a specific index, or immediately below or above another (known) view. You can also adjust the order of existing views using -bringSubviewToFront: and -sendSubviewToBack:, the latter of which you’ll use here. Your triple-tap gesture will “push” that subview to the back, behind all of the other shapes.
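For reference, the main UIView ordering methods look like this (the view variables in this sketch are hypothetical):

```objc
[self.view addSubview:newView];                          // goes on top
[self.view insertSubview:newView atIndex:0];             // at the very back
[self.view insertSubview:newView belowSubview:otherView];
[self.view insertSubview:newView aboveSubview:otherView];
[self.view bringSubviewToFront:someView];
[self.view sendSubviewToBack:someView];
```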
To make this effect more obvious, make a minor alteration to your -drawRect: method in SYShapeView.m by inserting two new lines of code—the statements that set the fill color and fill the path:
- (void)drawRect:(CGRect)rect
{
    UIBezierPath *path = self.path;
    [[[UIColor blackColor] colorWithAlphaComponent:0.3] setFill];
    [path fill];
    [self.color setStroke];
    [path stroke];
}
The new code fills the shape with black that’s 30% opaque (70% transparent). Your shapes will appear to have a “smoky” middle that darkens any shapes drawn behind them, which will make it easy to see how shapes overlap.
Run your app, create a few shapes, resize them, and then move them so they overlap, as shown in Figure 11-12.
Figure 11-12. Overlapping shapes with semi-transparent fill
The shapes you added last are “on top” of the shapes you added first. Now try double-tapping a shape to change its color. I’ll wait.
I’m still waiting.
Is something wrong? Double-tapping doesn’t seem to be changing the color of a shape. There are two probable reasons: either the -changeColor: message isn’t being received (which you could test by setting a breakpoint in Xcode), or it is being received and the color change isn’t showing up (which you can test by resizing the shape). If you double-tap a shape and then resize it, you’ll see the color change, so it’s the latter. Take a moment to fix this.
The problem is that the SYShapeView object doesn’t know that it should redraw itself whenever its color property changes. You could add a [shapeView setNeedsDisplay] statement to -changeColor:, but that’s a bit of a hack. I’m a strong believer that view objects should trigger their own redrawing when any property that changes their appearance is altered. That way, client code doesn’t have to worry about whether to send -setNeedsDisplay; the view takes care of that automatically.
Return to SYShapeView.m and add the following method:
- (void)setColor:(UIColor *)color
{
_color = color;
[self setNeedsDisplay];
}
This method replaces the default setter method created by the color property. The new method updates the _color instance variable (which is all the old setter method did), but it also sends itself a -setNeedsDisplay message. Now whenever you change the view’s color, it will immediately redraw itself.
Run the app and try the double-tap again. That’s much better!
Finally, you get to the part of the demonstration that rearranges the view. Overlap some views and then triple-tap one of the top views. Do you see the difference when the view is pushed to the back?
What is that, you say? The color changed when you triple-tapped it?
Oh, for Pete’s sake, don’t any of these gesture recognizer things work? Well, actually they do, but you’ve created an impossible situation. You’ve attached both a double-tap and a triple-tap gesture recognizer to the same view, and there’s no coordination between the two. What’s happening is that the double-tap recognizer fires as soon as you tap the second time, before the triple-tap recognizer gets a chance to see the third tap.
There are a number of ways to fix this bug, but the most common recognizer conflicts can be resolved with one line of code. Return to the SYViewController.m file, find the -addShape: method, and locate the code that adds the double- and triple-tap recognizers. Immediately after that, add this line:
[dblTapGesture requireGestureRecognizerToFail:trplTapGesture];
This message creates a dependency between the two recognizers. Now the double-tap recognizer won’t fire unless the triple-tap recognizer fails. When you tap twice, the triple-tap recognizer fails (it sees two taps but never gets a third), which creates all of the conditions needed for the double-tap recognizer to fire. If you triple-tap, however, the triple-tap recognizer succeeds, which prevents the double-tap recognizer from firing. Simple.
Now run your app for the last time. Resize and overlap some shapes. Triple-tap on a top shape to push it to the back and marvel at the results, shown in Figure 11-13.
Figure 11-13. Working Shapely app
By now you should have a firm grasp of how view objects get drawn, when, and why. You understand the graphics context, Bézier paths, the coordinate system, color, a little about transparency, 2D transforms, and even how to create simple animations. That’s a lot.
One thing you haven’t explored much is images. Let’s get to that by going back in time.
Images and Bitmaps
When you’re drawing into a graphics context, one of the things you don’t have access to is the individual pixels of your own creation. You can fill the view with a color, but you can’t ask the context what color a particular pixel was set to. The reason is encapsulation—there’s that word again. Your code can’t assume how, or even when, things actually get drawn. In all likelihood, your view is being drawn by a GPU into display memory your program doesn’t even have access to.
This can be awkward when you want to work with the individual pixels of an image. If you need to do that, you’ll have to allocate memory for those pixels. You can then manipulate those pixels directly, or use the graphics drawing functions to “paint” into your pixel array.
Creating Images from Bitmaps
You already used the first method in the ColorModel app you wrote back in Chapter 8. In it, the CMColorView class was eventually rewritten to display a hue/saturation color field. It did that by constructing an image object using a formula for the color of each individual pixel. I’ve extracted the relevant portion of that code, which you’ll find in Listing 11-2.
Listing 11-2. Image creation code from ColorModel
@interface CMColorView ()
{
    CGImageRef hsImageRef;
    float brightness;
}
@end
...
- (void)drawRect:(CGRect)rect
{
    CGRect bounds = self.bounds;
    CGContextRef context = UIGraphicsGetCurrentContext();
    if (hsImageRef!=NULL &&
        ( brightness!=_colorModel.brightness ||
          bounds.size.width!=CGImageGetWidth(hsImageRef) ||
          bounds.size.height!=CGImageGetHeight(hsImageRef) ) )
    {
        CGImageRelease(hsImageRef);
        hsImageRef = NULL;
    }
    if (hsImageRef==NULL)
    {
        brightness = _colorModel.brightness;
        NSUInteger width = bounds.size.width;
        NSUInteger height = bounds.size.height;
        typedef struct {
            uint8_t red;
            uint8_t green;
            uint8_t blue;
            uint8_t alpha;
        } Pixel;
        NSMutableData *bitmapData = [NSMutableData dataWithLength:sizeof(Pixel)
                                                                  *width*height];
        for ( NSUInteger y=0; y<height; y++ )
        {
            for ( NSUInteger x=0; x<width; x++ )
            {
                UIColor *color = [UIColor colorWithHue:(float)x/(float)width
                                            saturation:1.0f-(float)y/(float)height
                                            brightness:brightness
                                                 alpha:1];
                float red,green,blue,alpha;
                [color getRed:&red green:&green blue:&blue alpha:&alpha];
                Pixel *pixel = ((Pixel*)bitmapData.bytes)+x+y*width;
                pixel->red = red*255;
                pixel->green = green*255;
                pixel->blue = blue*255;
                pixel->alpha = 255;
            }
        }
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGDataProviderRef provider = CGDataProviderCreateWithCFData(
                                        (__bridge CFDataRef)bitmapData);
        hsImageRef = CGImageCreate(width,height,
                                   8,32,width*4,colorSpace,
                                   kCGBitmapByteOrderDefault,provider,NULL,
                                   false,kCGRenderingIntentDefault);
        CGColorSpaceRelease(colorSpace);
        CGDataProviderRelease(provider);
    }
    CGContextDrawImage(context,bounds,hsImageRef);
...
}
The CMColorView object keeps the finished image in its hsImageRef variable (a Core Graphics image reference, equivalent to an image object reference in Objective-C). It uses this image to draw the background of the view using CGContextDrawImage, the last statement in Listing 11-2.
This is done because creating the image requires a lot of work. To avoid doing that work unnecessarily, the finished image is stored in the object and reused whenever possible. This technique is called caching.
The only times the image can’t be reused are (a) the very first time the view is drawn and (b) when something about the view changes that invalidates the saved image. That’s what the first block of code is all about: it determines whether the view already has an image and whether that image is still correct. If either isn’t true, it makes a new one.
The real work begins with the if (hsImageRef==NULL) statement. This block of code creates a new image from a bunch of individual pixel values. To do this, you must arrange the pixels in memory in a fashion that Core Graphics can understand. Core Graphics supports a number of formats, but the most common is the red-green-blue-alpha (RGBA) format.
An RGBA image is a two-dimensional array of pixel values. Each pixel is represented by four 8-bit bytes, each an unsigned integer value between 0 and 255. The first byte is the red value (or component) of the pixel, the next the green value, then the blue value, and finally the alpha (opacity) value. The first three combine to define the color of the pixel, and the last determines its transparency: 0 is completely transparent and 255 is completely opaque.
An image that’s 100 pixels wide by 100 pixels high requires a 40,000-byte (100•100•4) array. That’s what the code leading up to the creation of the NSMutableData object (bitmapData) is doing: it calculates the number of pixels the image occupies and then allocates four bytes for each one (sizeof(Pixel)*width*height).
The next block of code spins in a loop, calculating the value for each pixel. When all of the pixels in the array have been set, it’s time to turn this gigantic array of numbers into an image. That’s a three-step process:
1. Obtain a color model.
2. Create an image data provider.
3. Create an image from the data provider using the color model.
The reason this is so convoluted is that there are lots of sources for image data (memory, resource files, network connections, and so on), and iOS needs to know what the color model is (RGB, HSL, CMYK, and so on). For your app, use the default RGB color model; the source of the image data is the bytes in the array you just filled in.
The function that does the work is CGImageCreate. Its parameters describe the dimensions of the image, the number of bits per pixel component (8) and per pixel (32), the number of bytes in each row of the array, the color model, the data provider, and a hint about how you want the image rendered. If you don’t have any particular opinion on that last matter, pass kCGRenderingIntentDefault.
That’s it! Now you have a CGImageRef that’s the image (object) created from the pile of pixel values in the array.
Creating Bitmaps From Drawings
You can also go the other direction—turning an image or drawing into a bunch of pixels—and there are two techniques, depending on what you want to do with the results.
The simplest, and recommended, technique is to call the UIGraphicsBeginImageContext function to create a graphics context backed by a block of memory (which it conveniently allocates for you). You only need to tell it how big a drawing area you want.
You then immediately start drawing into the context, just as if you were responding to a -drawRect: message. All of the drawing functions work, and their results are written into the temporary memory buffer. When you’re finished drawing, call UIGraphicsGetImageFromCurrentImageContext and iOS will return a new UIImage object containing the results of what you just drew. When you’re done, call UIGraphicsEndImageContext to dismantle the context and discard the temporary buffer. You’ll use this technique in Chapter 13.
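The whole sequence, sketched out (the red circle is just a stand-in for whatever you want to render):

```objc
UIGraphicsBeginImageContext(CGSizeMake(100.0, 100.0));
[[UIColor redColor] setFill];
[[UIBezierPath bezierPathWithOvalInRect:CGRectMake(10, 10, 80, 80)] fill];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// image is now a 100x100 UIImage containing a red circle
```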
While this technique is useful for turning any drawing into an image, it still doesn’t give you access
to the individual pixels of what was drawn; you can’t get that from the context or the UIImage object.
If you’re on a pixel hunt, you’ll need to use an even lower level function, CGBitmapContextCreate.
CGBitmapContextCreate creates a drawing context (just like UIGraphicsBeginImageContext), but the buffer is an array of bytes you supply, exactly as you did earlier in CMColorView. When the context is created, any drawing youperform is poured straight into that array. When you’re done drawing, you can do anything with the resulting pixels that you want: count the number of black pixels, find the darkest and lightest pixel, you name it.
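A sketch of that technique (the buffer size and the drawing here are illustrative):

```objc
// Draw into a pixel buffer you own, then inspect the pixels.
NSUInteger width = 100, height = 100;
NSMutableData *pixelData = [NSMutableData dataWithLength:width*height*4];
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate([pixelData mutableBytes],
                                             width, height,
                                             8,            // bits per component
                                             width*4,      // bytes per row
                                             colorSpace,
                                             kCGImageAlphaPremultipliedLast);
CGContextSetFillColorWithColor(context, [UIColor blackColor].CGColor);
CGContextFillRect(context, CGRectMake(0, 0, 50, 50));
// Every byte of the drawing now sits in pixelData, ready to examine;
// for example, the alpha byte of the pixel at (10,10):
const uint8_t *pixels = [pixelData bytes];
uint8_t alpha = pixels[(10*width+10)*4+3];
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
```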
All of these techniques, and the extensive list of pixel formats supported, are described in the
Quartz 2D Programming Guide you’ll find in Xcode’s Documentation and API Reference.
Advanced Graphics
Oh, there’s more. Before your head explodes from all of this graphics talk, let me briefly mention a few more techniques that could come in handy.
Text
You can also draw text directly into your custom view. The basic technique is:
1. Create a UIFont object that describes the font, style, and size of the text.
2. Set the drawing color.
3. Send an NSString object any of its -drawAtPoint:... or -drawInRect:... messages.
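Put together, those three steps might look like this inside a -drawRect: method (a sketch; the string, font, and position are made up):

```objc
UIFont *font = [UIFont boldSystemFontOfSize:18.0];    // 1. describe the font
[[UIColor darkGrayColor] set];                        // 2. set the drawing color
[@"Hello, Shapely!" drawAtPoint:CGPointMake(20, 20)   // 3. draw the string
                       withFont:font];
```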
You can also get the size that a string would draw (so you can calculate how much room it will take up) using the various -sizeWithFont:... methods.
You’ll find examples of this in the Touchy app you wrote in Chapter 4, and later in the Wonderland app in Chapter 12. The -drawAtPoint:... and -drawInRect:... methods are just wrappers for the low-level text drawing functions, which are described in the “Text” chapter of the Quartz 2D Programming Guide. If you need precise control over text, read the Core Text Programming Guide.
Shadows, Gradients, and Patterns
You’ve learned to draw solid shapes and solid lines. Core Graphics is capable of a lot more. It can paint with patterns and gradients, and it can automatically draw “shadows” behind the shapes you draw.
You accomplish this by creating various pattern, gradient, and shadow objects, and then setting them in your current graphics context, just as you would set the color. Copious examples and sample code can be found in the Quartz 2D Programming Guide.
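For example, a drop shadow takes only one extra call before you draw (a sketch; the offset, blur, and colors are arbitrary):

```objc
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSaveGState(context);
CGContextSetShadowWithColor(context,
                            CGSizeMake(3.0, 3.0),   // shadow offset
                            4.0,                    // blur radius
                            [UIColor blackColor].CGColor);
[[UIColor orangeColor] setFill];
UIRectFill(CGRectMake(20.0, 20.0, 100.0, 60.0));    // shadow drawn for you
CGContextRestoreGState(context);                    // later drawing: no shadow
```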
Blend Modes
Another property of your graphics context, and of many drawing functions, is the blend mode. A blend mode determines how the pixels of what’s being drawn affect the pixels of what’s already in the context. Normally, the blend mode is kCGBlendModeNormal. This mode paints opaque pixels, ignores transparent ones, and blends the colors of partially transparent ones.
There are some two dozen other blend modes. You can perform “multiplies” and “adds,” paint only over the opaque portions of the existing image, paint only in the transparent portions of the existing image, paint using “hard” or “soft” light, affect just the luminosity or saturation—the list goes on and on. You set the current blend mode using the CGContextSetBlendMode function. Some drawing methods take a blend mode parameter.
The available blend modes are documented, with examples, in two places, both in the Quartz 2D Programming Guide. For drawing operations (shapes and fills), refer to the “Setting Blend Modes” section of the “Paths” chapter. For examples of blending images, find the “Using Blend Modes with Images” section of the “Bitmap Images and Image Masks” chapter.
The Context Stack
All of these settings can start to make your graphics context hard to work with. Let’s say you need to draw a complex shape with a gradient, a drop shadow, a rotation, and a special blend mode. After you’ve set up all of those properties and drawn the shape, you just want to draw a simple line. Yikes! Do you now have to reset every one of those settings (drop shadow, transform, blend mode, and so on)?
Don’t panic: this is a common situation, and there’s a simple mechanism for dealing with it. Before you make a bunch of changes, call the CGContextSaveGState function to save almost everything about the current graphics context. It takes a snapshot of your current context settings and pushes them onto a stack. You can then change whatever drawing properties you need (clipping region, line width, stroke color, and so on) and draw whatever you want.
When you’re done, call CGContextRestoreGState and all of the context’s settings will be immediately restored to what they were when you called CGContextSaveGState.
You can nest these calls as deeply as you need: save, change, draw, save, change, draw, restore, draw, restore, draw. It’s not uncommon, in complex drawing methods, to begin with a call to CGContextSaveGState so that later portions of the method can retrieve an unadulterated graphics context.
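In code, the pattern looks like this (the specific settings are arbitrary):

```objc
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSaveGState(context);                // push a snapshot of the settings
CGContextSetLineWidth(context, 8.0);
CGContextSetBlendMode(context, kCGBlendModeMultiply);
// ... elaborate drawing here ...
CGContextRestoreGState(context);             // pop: line width, blend mode,
                                             // and the rest are restored
// now draw your simple line with a pristine context
```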
Summary
I think it’s time for a little celebration. What you’ve learned in this chapter is more than just some drawing mechanics.
Creating your own views, drawing your own graphics, and making your own animations is like trading in your erector set for a lathe. You’ve just graduated from building apps using pieces that other people have made to creating anything you can imagine.
I just hope the next chapter isn’t too boring after all of this freewheeling graphics talk. It doesn’t matter how cool your custom views are, users still need to get around your app. The next chapter is all about navigation.