Figure 16-1. Leveler design
Creating Leveler
Create a new Xcode project, as follows:
- Use the Single View Application template
- Product Name: Leveler
- Class Prefix: LR
- Devices: Universal
- After creating the project, edit the supported interface orientations to support all device orientations
Look up the word “leveler” for an interesting factoid on English history.
Leveler is going to need some image and source code resources. You’ll find the image files in the Learn iOS Development Projects ➤ Ch 16 ➤ Leveler (Resources) folder. Add the hand.png and hand@2x.png files to the Images.xcassets image catalog. In the finished Leveler 1 project folder, locate the LRDialView.h and LRDialView.m files. Add them to your project too, alongside your other source files. Remember to check the Copy items into destination group's folder option in the import dialog. You’ll also find a set of app icons in the Leveler (Icons) folder that you can drop into the AppIcon group of the image catalog.
You’ll first lay out and connect the views that will display the inclination before getting to the code that
gathers the accelerometer data.
Pondering LRDialView
The source files you just added contain the code for a custom UIView object that draws a circular “dial.” After reading Chapter 11, you shouldn’t have any problem figuring out how it works. The most interesting aspect is the use of affine transforms in the graphics context. In Chapter 11, you applied affine transforms to a view object, so it appeared either offset or scaled from its actual frame. In LRDialView, an affine transform is applied to the graphics context before drawing into it. Anything drawn afterwards is transformed accordingly.
In LRDialView, this technique is used to draw the tick marks and angle labels around the inside of the “dial.” If you’re interested, find the -drawRect: method in LRDialView.m. The significant bits of code are in bold, and irrelevant code has been replaced with ellipses:
#define kCircleDegrees      360
#define kMinorTickDegrees   3
...
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGRect bounds = self.bounds;
    CGFloat radius = bounds.size.height/2;
    ...
    CGContextTranslateCTM(context,radius,radius);
    for ( NSUInteger angle=0; angle<kCircleDegrees; angle+=kMinorTickDegrees )
    {
        ... draw one vertical tick and horizontal label ...
        CGContextConcatCTM(context,
            CGAffineTransformMakeRotation(kMinorTickDegrees*M_PI/180));
    }
}
The -drawRect: method first applies a translate transform to the context. This offsets the drawing coordinates, effectively moving the origin of the view’s local coordinate system to the center of the view. (The view is always square, as you’ll see later.) After applying this transform, a shape drawn at (0,0) appears at the center of the view rather than at its upper-left corner.
The loop draws one vertical tick mark and an optional text label below it. At the end of the loop, the drawing coordinates of the context are rotated 3°. The second time through the loop, the tick mark and label will be rotated 3°. The third time through the loop all drawing will be rotated 6°, and so on, until the entire dial has been drawn. Context transforms accumulate.
The key concept to grasp is that transformations applied to the drawing context affect the coordinate system of what’s being drawn into the view, as shown in Figure 16-2. Context transforms don’t change the view’s frame, bounds, or where it appears in its superview.
Figure 16-2. Graphics context transformation
To change how the view appears in its superview, you set the transform property of the view, as you did in the Shapely app. And that’s exactly what the view controller will do (later) to rotate the dial on the screen. This underscores the difference between using affine transforms while drawing and using a transform to alter how the finished view appears.
Also note that the view only draws itself once. All of this complicated code in -drawRect: executes only when the view is first drawn or resized. Once the view is drawn, the cached image of the dial appears in the display and gets rotated by the view’s transform property. This second use of a transform simply repositions the pixels in the cached image; it doesn’t cause the view to redraw itself at the new angle. In this respect, the drawing is very efficient. This is important, because later on you’re going to animate it.
Creating the Views
You’re going to add a label object to the Interface Builder file and then write code in LRViewController to programmatically create the LRDialView and the image view that displays the “needle” behind the dial. Start with the Main_iPhone.storyboard (or Main_iPad.storyboard) file.
Drag a label object into the interface. Using the attributes inspector, change the following:
- Text: 360° (press Option+Shift+8 to type the degree symbol)
- Color: White Color
- Font: System 60.0 (iPhone) or System 90.0 (iPad)
- Alignment: center
Select the label object and choose Editor ➤ Size to Fit Content. Position the object so it is centered at the top of the interface. Select the root view object and change its background color to Black Color.
Select the label and add the following constraints:
1. Fix its width (Editor ➤ Pin ➤ Width)
2. Fix its height (Editor ➤ Pin ➤ Height)
3. Center it (Editor ➤ Align ➤ Horizontal Center in Container)
4. Control/right-drag to the Top Layout Guide and create a Vertical Spacing constraint
5. Using the attributes inspector, select the constraint and check its Standard option
The finished interface should look like the one in Figure 16-3.
Figure 16-3. Leveler Interface Builder layout
Switch to the assistant view. With the LRViewController.h file in the right-hand pane, add this outlet property:
@property (weak,nonatomic) IBOutlet UILabel *angleLabel;
Connect the outlet to the label view in the interface, as shown in Figure 16-4.
Figure 16-4. Connecting angle label outlet
You’ll create and position the other two views programmatically. Switch back to the standard editor and select the LRViewController.m file. You’ll need the definition of the LRDialView class and the name of the image resource file, so add the following #import and #define declarations immediately after the existing #import directives:
#import "LRDialView.h"
#define kHandImageName @"hand"
You’ll also need some instance variables to keep a reference to the dial and image view objects and a method to position them. Add those to the private @interface section (new code in bold):
@interface LRViewController ()
{
    LRDialView *dialView;
    UIImageView *needleView;
}
- (void)positionDialViews;
@end
Create the two views when the view controller loads its view. Since this is the app’s only view controller, this will only happen once. Find the -viewDidLoad method and add the following bold code:
- (void)viewDidLoad
{
    [super viewDidLoad];
    dialView = [[LRDialView alloc] initWithFrame:CGRectMake(0,0,100,100)];
    [self.view addSubview:dialView];
    needleView = [[UIImageView alloc]
                  initWithImage:[UIImage imageNamed:kHandImageName]];
    needleView.contentMode = UIViewContentModeScaleAspectFit;
    [self.view insertSubview:needleView belowSubview:dialView];
}
When the view is loaded, the additional code creates new LRDialView and UIImageView objects, adding both to the view. Notice that the needleView is deliberately placed behind dialView. The dial view is partially transparent, allowing the needleView to show through it.
No attempt is made to size or position these views. That happens when the view is displayed or rotated. Catch those events by adding these two methods:
- (void)viewWillAppear:(BOOL)animated
{
    [super viewWillAppear:animated];
    [self positionDialViews];
}
- (void)didRotateFromInterfaceOrientation:(UIInterfaceOrientation)fromOrientation
{
    [self positionDialViews];
}
Just before the view appears for the first time, and whenever the view is rotated to a new orientation, reposition the dialView and needleView objects. You’ll also need to add this method for the iPhone version:
- (NSUInteger)supportedInterfaceOrientations
{
    return UIInterfaceOrientationMaskAll;
}
While you edited the supported orientations for the app, remember (from Chapter 14) that each view controller dictates which orientations it supports. By default, the iPhone’s UIViewController does not support upside-down orientation. This code overrides that to allow all orientations.
Finally, you’ll need the code for -positionDialViews:
- (void)positionDialViews
{
    CGRect viewBounds = self.view.bounds;
    CGRect labelFrame = self.angleLabel.frame;
    CGFloat topEdge = CGRectGetMaxY(labelFrame)+labelFrame.size.height/3;
    CGFloat dialHeight = ceilf((CGRectGetMaxY(viewBounds)-topEdge)*2);
    dialView.transform = CGAffineTransformIdentity;
    dialView.frame = CGRectMake(0, 0, dialHeight, dialHeight);
    dialView.center = CGPointMake(CGRectGetMidX(viewBounds),
                                  CGRectGetMaxY(viewBounds));
    [dialView setNeedsDisplay];
    CGSize needleSize = needleView.image.size;
    CGFloat needleScale = (dialHeight/2)/needleSize.height;
    CGRect needleFrame = CGRectMake(0, 0, needleSize.width*needleScale,
                                    needleSize.height*needleScale);
    needleFrame.origin.x = CGRectGetMidX(viewBounds)-needleFrame.size.width/2;
    needleFrame.origin.y = CGRectGetMaxY(viewBounds)-needleFrame.size.height;
    needleView.frame = CGRectIntegral(needleFrame);
}
This looks like a lot of code, but all it’s doing is sizing the dialView so it is square, positioning its center at the bottom center of the view, and sizing it so its top edge is just under the bottom edge of the label view. The needleView is then positioned so it’s centered and anchored to the bottom edge, and scaled so its height equals the visible height of the dial. This is a lot harder to describe than it is to see, so just run the app and see what I mean in Figure 16-5.
Figure 16-5. Dial and needle view positioning
That pretty much completes all of the view design and layout. Now you need to get the
accelerometer information and make your app do something.
Getting Motion Data
All iOS devices (as of this writing) have accelerometer hardware. The accelerometer senses the force of acceleration along three axes: X, Y, and Z. If you face the screen of your iPhone or iPad in portrait orientation, the X-axis is horizontal, the Y-axis is vertical, and the Z-axis is the line that goes from you, through the middle of the device, perpendicular to the screen’s surface.
You can use accelerometer information to determine when the device changes speed and in what direction. Assuming it’s not accelerating (much), you can also use this information to infer the direction of gravity, since gravity exerts a constant force on a stationary body. This is the information iOS uses to determine when you’ve flipped your iPad on its side or when you’re shaking your iPhone.
In addition to the accelerometer, recent iOS devices also include a gyroscope and a magnetometer. The former detects changes in rotation around the three axes (pitch, roll, yaw) and the magnetometer detects the orientation of a magnetic field. Barring magnetic interference, this will tell you the device’s attitude relative to magnetic North. (Which is a fancy way of saying it has a compass.)
Your app gets to all of this information through a single gatekeeper class: CMMotionManager. The CMMotionManager class collects, interprets, and delivers movement and attitude information to your app. You tell it what kind(s) of information you want (accelerometer, gyroscope, compass), how often you want to receive updates, and how those updates are delivered to your app.
Your Leveler app will only use accelerometer information, but the general pattern is the same for all types of motion data:
1. Create an instance of CMMotionManager
2. Set the frequency of updates
3. Choose what information you want and how your app will get it (pull or push)
4. When you’re ready, start the delivery of information
5. Process motion data as it occurs
6. When you’re done, stop the delivery of information
There’s no better place to start than step 1.
Creating CMMotionManager
Before all of the other #import statements in LRViewController.m, pull in the CoreMotion framework definitions:
#import <CoreMotion/CoreMotion.h>
You’ll need to specify how fast you want motion data updates. For neatness, define this as a constant, just after the #import statements:
#define kAccelerometerPollingInterval (1.0/15.0)
You will need an instance variable to store the CMMotionManager object reference and methods to process the motion data and rotate the dial. Add those to your private interface (new code in bold):
@interface LRViewController ()
{
    CMMotionManager *motionManager;
    LRDialView *dialView;
    UIImageView *needleView;
}
- (void)positionDialViews;
- (void)updateAccelerometerTime:(NSTimer*)timer;
- (void)rotateDialView:(double)rotation;
@end
Locate the -viewDidLoad method and add this code to the end of the method:
motionManager = [CMMotionManager new];
motionManager.accelerometerUpdateInterval = kAccelerometerPollingInterval;
You’ve completed the first two steps in using motion data. The first statement creates a new CMMotionManager object and saves it in your motionManager
instance variable.
The next statement tells the manager how long to wait between measurements. This property is expressed in seconds. For most apps, 10 to 30 times a second is adequate, but extreme apps might need updates as often as 100 times a second. For this app, you’ll start with 15 updates per second by setting the accelerometerUpdateInterval property to 1/15th of a second.
Starting and Stopping Updates
To perform the third and fourth steps in getting motion data, locate the -viewWillAppear: method and add this statement (new code in bold):
- (void)viewWillAppear:(BOOL)animated
{
    [super viewWillAppear:animated];
    [self positionDialViews];
    [motionManager startAccelerometerUpdates];
}
Just before the view appears, you request that the motion manager begin collecting accelerometer data.
The accelerometer information reported by CMMotionManager won’t be accurate—or even change—until you begin its update process. Once started, the motion manager code works tirelessly in the background to monitor any changes in acceleration and report those to your app.
Push Me, Pull You
It might not look like you’ve performed the third step in getting motion data, but you did. It was implied when you sent the -startAccelerometerUpdates message. This method starts gathering motion data, but it’s up to your app to periodically ask what those values are. This is called the pull approach; the CMMotionManager object keeps the motion data current and your app pulls the data from it as needed.
The alternative is the push approach. To use this approach, send the -startAccelerometerUpdatesToQueue:withHandler: message instead. You pass it an operation queue (that I’ll explain in Chapter 24) and a code block that gets executed the moment motion data is updated. This is much more complicated to implement because the code block is executed on a separate thread, so all of your motion data handling code must be thread safe. You really only need this approach if your app must, absolutely, positively, process motion data the instant it becomes available. There are very few apps that fall into this category.
Timing is Everything
Now you’re probably wondering how your app “periodically” pulls the motion data it’s interested in. The motion manager doesn’t post any notifications or send your object any delegate messages. What you need is an object that will remind your app to do something at regular intervals. It’s called a timer, and iOS provides just that. At the end of the -viewWillAppear: method, add this statement:
[NSTimer scheduledTimerWithTimeInterval:kAccelerometerPollingInterval
                                 target:self
                               selector:@selector(updateAccelerometerTime:)
                               userInfo:nil
                                repeats:YES];
An NSTimer object provides a timer for your app. It is one of the sources of events that I mentioned in Chapter 4, but never got around to talking about.
Timers come in two flavors: single shot or repeating. A timer has a timeInterval property and a message it will send to an object. After the amount of time in the timeInterval property has passed, the timer fires. At the next opportunity, the event loop will send the target object the timer’s message. If it’s a single-shot timer, that’s it; the timer becomes invalid and stops. If it’s a repeating timer, it continues running, waiting until another timeInterval amount of time has passed before firing again. A repeating timer continues to send messages until you send it an -invalidate message.
The code you added to -viewWillAppear: creates and schedules a timer that sends your view controller object an -updateAccelerometerTime: message approximately 15 times a second. This is the same rate at which the motion manager is updating its accelerometer information. There’s no point in checking for updates any faster, or slower, than the CMMotionManager object is gathering them.
Everything is in place, except the -updateAccelerometerTime: and -rotateDialView: methods. While still in LRViewController.m, add the first method:
- (void)updateAccelerometerTime:(NSTimer *)timer
{
    CMAcceleration acceleration = motionManager.accelerometerData.acceleration;
    double rotation = atan2(-acceleration.x, -acceleration.y);
    [self rotateDialView:rotation];
}
The first statement retrieves the accelerometerData property of the motion manager. Since you only started the gathering of accelerometer information, this is the only motion data property that’s valid. This property is a CMAccelerometerData object, and that object only has one property: acceleration. The acceleration property—which is a structure, not an object—contains three numbers: x, y, and z. Each value is the instantaneous force being exerted along that axis, measured in G’s.³ Assuming the device isn’t being moved around, the measurements can be combined to determine the gravitational vector; in other words, you can figure out which way is down.
Your app doesn’t need all three. You only need to determine which direction is up in the X-Y plane, because that’s where the dial lives. Ignoring the force along the Z axis, the arctangent function calculates the angle of the gravitational vector in the X-Y plane. The result is used to rotate the dialView by that same angle. Simple, isn’t it?
Complete the app by writing the -rotateDialView: method:
- (void)rotateDialView:(double)rotation
{
    dialView.transform = CGAffineTransformMakeRotation(rotation);
    NSInteger degrees = round(-rotation*180.0/M_PI);
    if (degrees<0)
        degrees += 360;
    _angleLabel.text = [NSString stringWithFormat:@"%u\u00b0",(unsigned)degrees];
}
³ G is the force of gravity, equal to an acceleration of approximately 9.81 meters per second every second.
This method turns the rotation parameter into an affine transform that rotates dialView. The last block of code converts the rotation value from radians into degrees, makes sure it’s not negative, and uses that to update the label view.
It’s time to plug in your provisioned iOS device, run your app, and play with the results, as shown in Figure 16-6. Notice how the app switches orientation as you rotate it. If you lock the device’s orientation, it won’t do that, but the dial still works.
Figure 16-6. Working Leveler app
Herky-Jerky
Your app works, and it was pretty easy to write, but boy is it hard to look at. If it works anything like the way it does on my devices, the dial jitters constantly. Unless the device is perfectly still, it’s almost impossible to read.
It would be really nice if the dial moved more smoothly—a lot more smoothly. That sounds like a job for animation. What you want is an animation that makes the dial appear to have mass, gently drifting towards the instantaneous inclination reported by the hardware.
So what are your choices? In the past, you’ve used Core Animation to smoothly move views around. But Core Animation is like a homing pigeon; you take it to where you want it to start, tell it where you want it to end up, and let it go. Once started, it’s not designed to make in-flight course corrections should the end point change. And that’s exactly what will happen when new accelerometer information is received.
You could try to smooth out the updates yourself by clamping the rate at which the view is rotated. To make it look really nice, you might even go so far as to create a simple physics engine that gives the dial simulated mass, acceleration, drag, and so on. But as I mentioned in Chapter 11, the do-it-yourself approach to animation is fraught with complications, is usually a lot of work, and often results in substandard performance.
Lucky for you, iOS 7 introduced view dynamics. View dynamics is a new animation service that endows your views with a simulated physicality that mimics mass, gravity, acceleration, drag, collisions, and so on. View dynamics takes a substantially different approach to animation. Instead of describing what you want the animation to do (move this many pixels, rotate that many degrees, and so on), you describe the “forces” acting on a view and let the dynamic animator create an animation that simulates the view’s reaction to those forces.
Using Dynamic Animation
Dynamic animation involves three players:
- The dynamic animator object
- One or more behavior objects
- One or more view objects
The dynamic animator is the object that performs the animation. It contains a complex physics engine that’s remarkably intelligent. You’ll need to create a single instance of the dynamic animator.
Animation occurs when you create behavior objects and add those to the dynamic animator.
A behavior describes a single impetus or attribute of a view. iOS includes predefined behaviors for gravity, acceleration, friction, collisions, connections, and more, and you’re free to invent your own. A behavior is associated with one or more view (UIView) objects, imparting that particular behavior to all of its views. The dynamic animator does the work of combining multiple behaviors for a single view—acceleration plus gravity plus friction, for example—to construct the animation for that view.
So the basic formula for dynamic animation is:
1. Create an instance of UIDynamicAnimator
2. Create one or more UIDynamicBehavior objects, attached to UIView objects
3. Add the UIDynamicBehavior objects to the UIDynamicAnimator
4. Sit back and enjoy the show
You’re now ready to add view dynamics to Leveler.
Creating the Dynamic Animator
You’ll need to create a dynamic animator object, and for that you’ll need an instance variable to save it in. Find the private @interface LRViewController () declaration in LRViewController.m. Add an instance variable for your animator (new code in bold). While you’re here, add some constants and a variable to contain an attachment behavior, all of which will be explained shortly:
#define kSpringAnchorDistance   4.0
#define kSpringDamping          0.7
#define kSpringFrequency        0.5

@interface LRViewController ()
{
    CMMotionManager *motionManager;
    LRDialView *dialView;
    UIImageView *needleView;
    UIDynamicAnimator *animator;
    UIAttachmentBehavior *springBehavior;
}
You’ll need to create the dynamic animator, create the behavior objects you want, and connect those to your views. The perfect place to do all three is in the -positionDialViews method. Find the
-positionDialViews method and add this code to the very beginning (new code in bold):
- (void)positionDialViews
{
    if (animator!=nil)
        [animator removeAllBehaviors];
    else
        animator = [[UIDynamicAnimator alloc] initWithReferenceView:self.view];
This code simply determines whether a UIDynamicAnimator object has already been created. If it has, it resets it by removing any active behaviors. If it hasn’t, it creates a new dynamic animator.
When you create a dynamic animator, you must specify a view that will be used to establish the coordinate system the dynamic animator will use. The dynamic animator uses its own coordinate system, called the reference coordinate system, so that view objects in different view hierarchies (each with their own coordinate system) can interact with one another in a unified coordinate space. Using the reference coordinate system you could, for example, have a view in your content view controller collide with a button in the toolbar, even though it resides in an unrelated superview.
For your app, make the reference coordinate system that of your view controller’s root view. This makes all dynamic animator coordinates the same as your local view coordinates. Won’t that be convenient?
(Yes, it will.)
Defining Behaviors
So what behavior(s) do you think the dial view should have? If you look through the behaviors supplied by iOS, you won’t find a “rotation” behavior. But the dynamic animator will rotate a view, if the forces acting on that view would cause it to rotate. Rotating the dial view, therefore, isn’t any more difficult than rotating a record platter, a merry-go-round, a lazy Susan, or anything similar: anchor the center of the object and apply an oblique force to one edge.
You’ll accomplish this using two attachment behaviors. An attachment behavior connects a point in your view with either a similar point in another view or a fixed point in space, called an anchor. The length of the attachment can be inflexible, creating a “towbar” relationship that keeps the attachment point at a fixed distance, or it can be flexible, creating a “spring” relationship that tugs on the view when the other end of the attachment moves. To rotate the dial view, you’ll use one of each, as shown in Figure 16-7.
Figure 16-7. dialView attachment behaviors
Create the first behavior by adding this code to the end of the -positionDialViews method:
CGPoint dialCenter = dialView.center;
UIAttachmentBehavior *pinBehavior;
pinBehavior = [[UIAttachmentBehavior alloc] initWithItem:dialView
                                        attachedToAnchor:dialCenter];
[animator addBehavior:pinBehavior];
The attachment behavior defines a rigid attachment from the center of the dial view to a fixed anchor point, at the exact same location. When you create an attachment behavior, the distance between the two attachment points defines its initial length, which in this case is 0. Since the attachment is inflexible and its length is 0, the net effect is to pin the center of the view at that coordinate. The view’s center can’t move from that spot.
All that remains is to add that behavior to the dynamic animator. All by itself, this doesn’t accomplish much, except to prevent the view from being moved to a new location. Things get interesting when you add a second attachment behavior, using the following code:
CGRect dialRect = dialView.frame;
CGPoint topCenter = CGPointMake(CGRectGetMidX(dialRect),
                                CGRectGetMinY(dialRect));
springBehavior = [[UIAttachmentBehavior alloc] initWithItem:dialView
                       offsetFromCenter:UIOffsetMake(0,topCenter.y-dialCenter.y)
                       attachedToAnchor:topCenter];
springBehavior.damping = kSpringDamping;
springBehavior.frequency = kSpringFrequency;
[animator addBehavior:springBehavior];
The first two statements calculate the point at the top center of the view. A second attachment behavior is created. This time the attachment point is not in the center of the view, but at its top center
(expressed as an offset from its center).
Again, the anchor point is the same location as the attachment point, creating a zero length attachment.
What’s different is that the damping and frequency properties are then set to something other than their default values. This creates a “springy” connection between the anchor point and the attachment point. But since the anchor and the attachment point are currently the same, no force is applied (yet).
Animating the Dial
The stage is set and all of the players are in place. You’ve defined a behavior that pins the center of the dial to a specific position, and a second that will “tug” the top center point towards a second anchor point. The action begins when you move that second anchor point, as shown in Figure 16-7.
Locate the -rotateDialView: method and delete the first statement, the one that created an affine transform and applied it to the view. Replace that code with the following (new code in bold):
- (void)rotateDialView:(double)rotation
{
    CGPoint center = dialView.center;
    CGFloat radius = dialView.frame.size.height/2 + kSpringAnchorDistance;
    CGPoint springPoint = CGPointMake(center.x+sin(rotation)*radius,
                                      center.y-cos(rotation)*radius);
    springBehavior.anchorPoint = springPoint;
Instead of the traditional approach of telling the graphics system what kind of change you want to see (rotate the view by a certain angle), you describe a change to the physical environment and let the dynamic animator simulate the consequences. In this app, you moved the anchor point attached to the top center point of the view. Moving the anchor point creates an attraction between the new anchor point and the attachment point in the view. Since the center of the view is pinned by the first behavior, the only way the top point of the view can get closer to the new anchor point is to rotate the view, and that’s exactly what happens.
Run the app and see the effect. The dial acts much more like a “real” dial. There’s acceleration,
deceleration, and even oscillation. These effects are all courtesy of the physics engine in the dynamic
animator.
Try altering the values of kSpringAnchorDistance, kSpringDamping, and kSpringFrequency and observe how this affects the dial. For extra credit, add a third behavior that adds some “drag” to the dial. Create a UIDynamicItemBehavior object, associate it with the dial view, and set its angularResistance property to something other than 0; I suggest starting with a value of 2.0. Don’t forget to add the finished behavior to the dynamic animator.
You now have a nifty inclinometer that’s silky smooth and fun to watch. Now that you know how
easy it is to add motion data to your app, and simulate motion using view dynamics, let’s take a look at some of the other sources of motion data.
Getting Other Kinds of Motion Data
As of this writing, there are three other kinds of motion data your app can use. You can collect and use the other kinds of data instead of, or in addition to, the accelerometer data. Here are the kinds of motion data iOS provides:
- Gyroscope: measures the rate at which the device is being rotated around its three axes
- Magnetometer: measures the orientation of the surrounding magnetic field
- Device Motion: combines information from the accelerometer, magnetometer, and gyroscope to produce useful values about the device’s motion and position in space
Using the other kinds of motion data is identical to what you’ve done with the accelerometer data, with one exception. Not all iOS devices have a gyroscope or a magnetometer. You will have to decide if your app must have these capabilities, or can function in their absence. That decision will dictate how you configure your app’s project and write your code. Let’s start with the gyroscope.
Gyroscope Data
If you’re interested in the instantaneous rate at which the device is being rotated—logically equivalent to the accelerometer data, but for angular force—gather gyroscope data. You collect gyroscope data almost exactly as you do accelerometer data. Begin by setting the gyroUpdateInterval property of the motion manager object, and then send it either a -startGyroUpdates or -startGyroUpdatesToQueue:withHandler: message.
The gyroData property returns a CMGyroData object, which has a single rotationRate property. This property is a struct (just like CMAcceleration) with three values: x, y, and z. Each value is the rate of rotation around that axis, in radians per second.
You must consider the possibility that the user’s device doesn’t have a gyroscope. There are two approaches:
- If your app requires gyroscopic hardware to function, add the gyroscope value to the Required Device Capabilities collection of your app’s property list.
- If your app can run with, or without, a gyroscope, test the gyroAvailable property of the motion manager object.
The first approach makes the gyroscope hardware a requirement for your app to run. If added to your app’s property list, iOS won’t allow the app to be installed on a device that lacks a gyroscope. The App Store may hide the app from users whose devices lack a gyroscope, or warn them that your app may not run on their device.
If your app can make use of gyroscope data, but could live without it, test for the presence of a gyroscope by reading the gyroAvailable property of the motion manager object. If it’s YES, feel free to start and use the gyroscope data. If it’s NO, make other arrangements.
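Putting the second approach into practice, a minimal sketch of testing for a gyroscope and then collecting rotation-rate samples on the main queue might look like the following. The motionManager property and the kUpdateInterval constant are assumptions for illustration; your app would define its own.

```objc
// Sketch only: assumes self.motionManager is an existing CMMotionManager
// property, and kUpdateInterval is an interval constant your app defines.
if (self.motionManager.gyroAvailable) {
    self.motionManager.gyroUpdateInterval = kUpdateInterval;
    [self.motionManager startGyroUpdatesToQueue:[NSOperationQueue mainQueue]
                                    withHandler:^(CMGyroData *gyroData, NSError *error) {
        if (error != nil)
            return;
        // rotationRate values are in radians per second, one per axis
        CMRotationRate rate = gyroData.rotationRate;
        NSLog(@"rotation: x=%f y=%f z=%f", rate.x, rate.y, rate.z);
    }];
} else {
    // No gyroscope on this device: make other arrangements
}
```

Remember to send -stopGyroUpdates when your app no longer needs the data, so the hardware can power down.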
Magnetometer Data
The magnitude and direction of the magnetic field surrounding your device is available via the magnetometer data. By now, this is going to sound like a broken record:
n Set the frequency of magnetometer updates using the
magnetometerUpdateInterval property.
n Start magnetometer measurements using the -startMagnetometerUpdates or -startMagnetometerUpdatesToQueue:withHandler: messages.
n The magnetometerData property returns a CMMagnetometerData object with the current readings.
n The CMMagnetometerData object’s sole property is the magneticField property, a structure with three values: x, y, and z. Each is the direction and strength of the field along that axis, in µT (microteslas).
n Either add the magnetometer value to your app’s Required Device Capabilities property or check the magnetometerAvailable property to determine if the device has one.
Like the accelerometer and gyroscope data, the magnetometerData property returns the raw, unfiltered, magnetic field information. This will be a combination of the Earth’s magnetic field, the device’s own magnetic bias, any ambient magnetic fields, magnetic interference, and so on.
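The pattern above can be sketched in code much like the gyroscope example; this is an illustrative fragment, and the motionManager property and the 0.1-second interval are assumptions, not requirements.

```objc
// Sketch only: assumes self.motionManager is an existing CMMotionManager.
if (self.motionManager.magnetometerAvailable) {
    self.motionManager.magnetometerUpdateInterval = 0.1;  // seconds, chosen for illustration
    [self.motionManager startMagnetometerUpdatesToQueue:[NSOperationQueue mainQueue]
                                            withHandler:^(CMMagnetometerData *magnetometerData,
                                                          NSError *error) {
        if (error != nil)
            return;
        // Raw, uncalibrated field strength along each axis, in microteslas
        CMMagneticField field = magnetometerData.magneticField;
        NSLog(@"field: x=%f y=%f z=%f", field.x, field.y, field.z);
    }];
}
```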
Teasing out magnetic North from this data is a little tricky. What looks like North might be a microwave oven. Similarly, the accelerometer data can change because the device was tilted,
or because it’s in a moving car, or both. You can unravel some of these conflicting indicators by collecting and correlating data from multiple instruments. For example, you can tell the difference between a tilt and a horizontal movement by examining the changes to both the accelerometer and gyroscope; a tilt will change both, but a horizontal movement will only register on the accelerometer.
If you’re getting the sinking feeling that you should have been paying more attention in your physics and math classes, you can relax; iOS has you covered.
Device Motion and Attitude
The CMMotionManager also provides a unified view of the device’s physical position and movements through its device motion interface. The device motion properties and methods combine the information from the accelerometer, gyroscope, and sometimes the magnetometer. It assimilates all of this data and produces a filtered, unified, calibrated picture of the device’s motion and position in space.
You use device motion in much the way you used the preceding three instruments:
n Set the frequency of device motion updates using the
deviceMotionUpdateInterval property.
n Start device motion updates by sending a -startDeviceMotionUpdates,
-startDeviceMotionUpdatesToQueue:withHandler:,
-startDeviceMotionUpdatesUsingReferenceFrame:, or -startDeviceMotionUpdatesUsingReferenceFrame:toQueue:withHandler: message.
n The deviceMotion property returns a CMDeviceMotion object with the current motion and attitude information.
n Determine if device motion data is available using the deviceMotionAvailable
property.
There are two big differences between the device motion and previous interfaces. When starting updates, you can optionally provide a CMAttitudeReferenceFrame constant that selects a frame of reference for the device. There are four choices:
n Direction of the device is arbitrary (CMAttitudeReferenceFrameXArbitraryZVertical)
n Direction is arbitrary, but use the magnetometer to eliminate “yaw drift” (CMAttitudeReferenceFrameXArbitraryCorrectedZVertical)
n Direction is calibrated to magnetic North (CMAttitudeReferenceFrameXMagneticNorthZVertical)
n Direction is calibrated to true North (CMAttitudeReferenceFrameXTrueNorthZVertical; requires location services)
The neutral reference position of your device can be imagined by placing your iPhone or iPad flat on a table in front of you, with the screen up and the home button toward you. The line from the home button to the top of the device is the Y-axis. The X-axis runs horizontally from the left side to the right. The Z-axis runs through the device, straight up and down. Spinning your device, while it’s still flat on the table, changes its direction. It’s this direction that the reference frame is concerned with. If the direction doesn’t matter, you can use either of the arbitrary reference frames. If you need to know the direction in relationship to true or magnetic North, use one of the calibrated reference frames.
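Pulling those steps together, a hedged sketch of starting device motion updates calibrated to magnetic North might look like this (the motionManager property and the 30-updates-per-second interval are assumptions for illustration):

```objc
// Sketch only: assumes self.motionManager is an existing CMMotionManager.
if (self.motionManager.deviceMotionAvailable) {
    self.motionManager.deviceMotionUpdateInterval = 1.0 / 30.0;  // 30 updates per second
    [self.motionManager
        startDeviceMotionUpdatesUsingReferenceFrame:CMAttitudeReferenceFrameXMagneticNorthZVertical
                                            toQueue:[NSOperationQueue mainQueue]
                                        withHandler:^(CMDeviceMotion *motion, NSError *error) {
        if (error != nil)
            return;
        // The attitude describes the device's position in space as
        // pitch, roll, and yaw angles, in radians.
        CMAttitude *attitude = motion.attitude;
        NSLog(@"pitch=%f roll=%f yaw=%f", attitude.pitch, attitude.roll, attitude.yaw);
    }];
}
```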
The second big difference is the CMDeviceMotion object. Unlike the other motion data objects, this one
has several properties, listed in Table 16-1.
Table 16-1. Key CMDeviceMotion properties
Property Description
attitude A CMAttitude object that describes the actual attitude (position in space) of the device, described as a triplet of property values (pitch, roll, and yaw). Additional properties describe the same information in mathematically equivalent forms, both as a rotation matrix and as a quaternion.
rotationRate A structure with three values (x, y, and z) describing the rate of rotation around those axes.
userAcceleration A CMAcceleration structure (x, y, and z) describing the motion of the device.
magneticField A CMCalibratedMagneticField structure (x, y, z, and accuracy) that describes the direction of the Earth’s magnetic field.
n The attitude property combines information from the gyroscope to measure changes in angle and from the accelerometer to determine the direction of gravity, and it may also use the magnetometer to calibrate direction (rotation around the Z-axis) and prevent drift.
n The userAcceleration property correlates accelerometer and gyroscope data, excluding the force of gravity and changes in attitude, to provide an accurate measurement of acceleration.
n The magneticField property adjusts for the device bias and attempts to compensate for magnetic interference.
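Inside a device motion update handler, these properties can all be read from the one CMDeviceMotion object. Here is an illustrative sketch; the motionManager property is an assumption, and in a handler-based design you’d use the handler’s motion parameter instead of polling the deviceMotion property.

```objc
// Sketch of reading several CMDeviceMotion properties at once.
CMDeviceMotion *motion = self.motionManager.deviceMotion;  // or the handler's parameter
if (motion != nil) {
    CMAcceleration userAccel = motion.userAcceleration;      // gravity already factored out
    CMRotationRate rate = motion.rotationRate;               // bias-corrected, radians/second
    CMCalibratedMagneticField field = motion.magneticField;  // field plus an accuracy value
    if (field.accuracy != CMMagneticFieldCalibrationAccuracyUncalibrated) {
        NSLog(@"calibrated field: x=%f y=%f z=%f",
              field.field.x, field.field.y, field.field.z);
    }
    NSLog(@"user acceleration: x=%f y=%f z=%f", userAccel.x, userAccel.y, userAccel.z);
    NSLog(@"rotation rate: x=%f y=%f z=%f", rate.x, rate.y, rate.z);
}
```

Note that the calibrated magneticField value is only meaningful once the accuracy field indicates the reading has been calibrated.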
In all, the device motion interface is much more informed and intelligent. If there’s a downside, it’s that it requires more processing power, which steals app performance and battery life. If all your app needs is a general idea of motion or rotation, then the raw data from the accelerometer or gyroscope is all you need. But if you really want to know the device’s position, direction, or orientation, then the device motion interface has it figured out for you.
Measuring Change
If your app needs to know the rate of change of any of the motion measurements, it needs time information. For example, to measure the change in angular rotation you’d subtract the previous rate from the current rate, and divide that by the time delta between the two samples.
But where can you find out when these measurements were taken? In earlier sections I wrote, “CMAccelerometerData’s only property is acceleration,” along with similar statements about CMGyroData and CMMagnetometerData. That’s not strictly true.
The CMAccelerometerData, CMGyroData, CMMagnetometerData, and CMDeviceMotion classes are all subclasses of CMLogItem. The CMLogItem class defines a timestamp property, which all of the aforementioned classes inherit.
The timestamp property records the exact time the measurement was taken, allowing your app to accurately compare samples and calculate their rate of change, record them for posterity, or use them for any other purpose you might imagine.
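As a sketch, the angular acceleration around the Z-axis could be estimated from two successive gyroscope samples using their inherited timestamp values. The -processGyroSample: method name and the previousSample property are assumptions for illustration.

```objc
// Sketch: estimate angular acceleration (radians/second²) around the Z-axis
// from two consecutive CMGyroData samples. self.previousSample is assumed
// to be a strong CMGyroData property your class maintains.
- (void)processGyroSample:(CMGyroData *)sample
{
    CMGyroData *previous = self.previousSample;
    if (previous != nil) {
        // timestamp is inherited from CMLogItem, in seconds
        NSTimeInterval delta = sample.timestamp - previous.timestamp;
        if (delta > 0.0) {
            double zAcceleration = (sample.rotationRate.z - previous.rotationRate.z) / delta;
            NSLog(@"angular acceleration (z): %f rad/s²", zAcceleration);
        }
    }
    self.previousSample = sample;
}
```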
Summary
In this chapter you’ve tapped into the unfiltered data of the device’s accelerometer, gyroscope, and magnetometer. You know how to configure the data you want to collect, how to interpret it, and how to use timers to collect it. You’ve also learned how to exploit the device motion data for a more informed view of the device’s position in space. There’s almost no motion or movement that your app can’t detect and react to.
Well, almost. Despite the incredibly detailed information about the direction of the device and how it’s being moved around, there’s still one piece of information missing: where the device is located. You’ll solve that remaining mystery in the next chapter.