<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Codethink &#187; iphone</title>
	<atom:link href="https://codethink.no-ip.org/tags/iphone/feed" rel="self" type="application/rss+xml" />
	<link>https://codethink.no-ip.org</link>
	<description>A blog about coding, life, and other arbitrary topics</description>
	<lastBuildDate>Sun, 15 Mar 2026 21:30:15 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=4.1.29</generator>
	<item>
		<title>[iOS] Jira Mobile Connect</title>
		<link>https://codethink.no-ip.org/archives/788</link>
		<comments>https://codethink.no-ip.org/archives/788#comments</comments>
		<pubDate>Fri, 04 Nov 2011 06:48:08 +0000</pubDate>
		<dc:creator><![CDATA[aroth]]></dc:creator>
				<category><![CDATA[banter]]></category>
		<category><![CDATA[coding]]></category>
		<category><![CDATA[software]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[jira]]></category>
		<category><![CDATA[plugin]]></category>
		<category><![CDATA[tools]]></category>

		<guid isPermaLink="false">http://codethink.no-ip.org/wordpress/?p=788</guid>
		<description><![CDATA[Not long ago Atlassian released version 1.0 (now up to 1.0.7) of their Jira Mobile Connect plugin. This is a plugin for Jira (obviously) that aims to simplify testing, error-reporting, and feedback collection/management for iOS applications. Assuming that you are &#8230; <a href="https://codethink.no-ip.org/archives/788">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>Not long ago Atlassian released version 1.0 (now up to 1.0.7) of their <a href="https://plugins.atlassian.com/plugin/details/322837" target="_blank">Jira Mobile Connect plugin</a>.  This is a plugin for Jira (obviously) that aims to simplify testing, error-reporting, and feedback collection/management for iOS applications.  Assuming that you are doing iOS software development and have a Jira server instance running (which you really should if you are doing any nontrivial amount of development work) then using this plugin in your apps is really a no-brainer.  Jira Mobile Connect includes a number of very cool features, such as the ability for users to attach annotated screenshots/images to their feedback reports, to record audio to attach with their feedback, and even to chat back and forth with the developer(s) working on their issue/ticket.  And of course it does basic crash logging and reporting, as well.</p>
<p>Previously if you wanted a free/open-source crash reporting framework for iOS your options were basically limited to <a href="http://quincykit.net/server.html" target="_blank">QuincyKit</a>, which is a serviceable but basic solution.  Sadly, the backing architecture used by the QuincyKit server is not well designed and scales very poorly with the number of crash reports in the system.  Once you have around 5,000 you&#8217;ll notice the server slowing down significantly, and if you go much beyond 10,000 the system grinds to an unusable halt.  With a day or so of database and code refactoring, these scalability issues can be resolved, creating a system performant enough to track millions of error logs or more.  But that&#8217;s a subject for another time.  The point for today is that despite its basic level of functionality and flawed server architecture, there are a few key areas in which QuincyKit blows the default implementation of Jira Mobile Connect out of the water:</p>
<ol>
<li>Automatic grouping of crash reports.</li>
<li>Automatic symbolication of error logs.</li>
</ol>
<p>If you have any experience whatsoever with supporting multiple high-volume iOS applications you will instantly realize that these are features that you want.  They might even be features that you want more than annotated screenshots, audio feedback, user chat, or seamless integration with Jira and all the awesomeness that Jira brings.  In short, Jira Mobile Connect&#8217;s lack of support for these two key features may cause serious developers to pass it over in favor of other solutions.  </p>
<p>Without grouping, every single crash creates a new ticket in Jira that you need to track and resolve, and multiple instances of the same crash must be manually flagged as duplicates.  And without symbolication, trying to map an error log back to the line of code that caused it is an exercise in futility, or at best, tedium.</p>
<p>In any case, rather than abandon the excellent potential shown by Jira Mobile Connect I decided that I would attempt to patch it up and add the missing features myself.  It&#8217;s all open-source code, after all, and if the tangled mess of PHP that is the QuincyKit server can provide these features then they can&#8217;t be that difficult to implement.  Unfortunately I had to change too many files in too many different places to show the code here, but the short version is that I was able to implement both grouping and symbolication, and you&#8217;re welcome to view the complete diffs on bitbucket:</p>
<p><a href="https://bitbucket.org/aroth_iapps/jiraconnect-ios-iapps/compare/..atlassian/jiraconnect-ios" target="_blank">https://bitbucket.org/aroth_iapps/jiraconnect-ios-iapps/compare/..atlassian/jiraconnect-ios</a> (client/native iOS code)<br />
<a href="https://bitbucket.org/aroth_iapps/jiraconnect-jiraplugin-iapps/compare/..atlassian/jiraconnect-jiraplugin" target="_blank">https://bitbucket.org/aroth_iapps/jiraconnect-jiraplugin-iapps/compare/..atlassian/jiraconnect-jiraplugin</a> (server/Java code)</p>
<p>One interesting side-effect of adding groups was that it became possible for multiple client UID&#8217;s to be associated with a single Jira ticket.  This had an important implication for feedback notifications/chat in that the reference implementation allowed only a single UID to be associated with each Jira ticket.  Since the UID is used for determining what notifications/updates to send to the native client, this restricted update notifications to a single user per ticket.  Not too useful if you have a common crash that thousands of users have experienced.  The implementation above extends the data model to allow multiple UID&#8217;s to be stored against a single Jira ticket, allowing each UID to be updated when new feedback is posted by the developer on a ticket.  In essence, implementing grouping also required the implementation of group feedback/chat.</p>
<p>There is one caveat with my server implementation:  it assumes the existence of the &#8216;<em>symbolicatecrash</em>&#8217; utility on the system&#8217;s runtime <em>PATH</em>.  This means that it will only work if your Jira server is hosted on a Mac, with the proper Xcode developer tools installed (and with your application&#8217;s .app and .dSYM files copied to the local filesystem).  Of course, this requirement applies no matter how you set things up; if you want automatic symbolication, then somewhere there needs to be a Mac with &#8216;<em>symbolicatecrash</em>&#8217; available.  In any case, it is a fairly simple matter to either turn this feature off or otherwise make it more intelligent if your Jira server is incapable of running &#8216;<em>symbolicatecrash</em>&#8217;.</p>
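<p>For reference, the symbolication step that the server shells out to amounts to something like the following invocation (the paths shown are purely illustrative, and the exact location of the &#8216;symbolicatecrash&#8217; script varies between Xcode releases):</p>
<pre class="brush: plain; title: ; notranslate">#symbolicatecrash needs to know where the developer tools live, and it needs
#the matching .app and .dSYM files to be discoverable on the local disk
export DEVELOPER_DIR=/Developer
symbolicatecrash MyApp.crash MyApp.app.dSYM &gt; MyApp-symbolicated.crash</pre>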
<p>Also note that the native iOS code has been restructured to build a universal iOS framework as opposed to an architecture-specific static library.  This is done using Karl Stenerud&#8217;s excellent <a href="https://github.com/kstenerud/iOS-Universal-Framework" target="_blank">Xcode 4 project template</a>.  You will need to install this template in order to build the modified code.  Or you can refactor it back to build a static library again, but why would you want to do that?</p>
<p>When using the iOS framework, be aware that you will need to set the &#8216;<em>-all_load</em>&#8217; linker flag and also include all the images and nibs in the framework&#8217;s &#8216;<em>/Resources</em>&#8217; folder as part of your build.  You will probably also want to include the &#8216;<em>JMCLocalizable.strings</em>&#8217; file from the same folder, to provide proper text and labels on Jira Mobile Connect&#8217;s UI elements.</p>
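<p>Concretely, the build configuration involved looks roughly like the following (setting names as they appear in Xcode 4; treat this as a sketch rather than an exact recipe):</p>
<pre class="brush: plain; title: ; notranslate">Other Linker Flags:      -all_load
Copy Bundle Resources:   everything under the framework's /Resources folder,
                         plus JMCLocalizable.strings</pre>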
<p>Regardless, if you do a lot of iOS development and are already using Jira (like you should be) then I encourage you to check this out.  This is the Jira Mobile Connect plugin, now with automatic grouping and error log symbolication.</p>
]]></content:encoded>
			<wfw:commentRss>https://codethink.no-ip.org/archives/788/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>[Objective-C + Cocoa] iPhone Screen Capture Revisited</title>
		<link>https://codethink.no-ip.org/archives/673</link>
		<comments>https://codethink.no-ip.org/archives/673#comments</comments>
		<pubDate>Wed, 15 Jun 2011 13:36:12 +0000</pubDate>
		<dc:creator><![CDATA[aroth]]></dc:creator>
				<category><![CDATA[coding]]></category>
		<category><![CDATA[objective-c]]></category>
		<category><![CDATA[hack]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[recording]]></category>
		<category><![CDATA[video]]></category>

		<guid isPermaLink="false">http://codethink.no-ip.org/wordpress/?p=673</guid>
		<description><![CDATA[Awhile back I posted a handful of simple iOS utilities. Among them was a basic ScreenCaptureView implementation that would periodically render the contents of its subview(s) into a UIImage that was exposed as a publicly accessible property. This provides the &#8230; <a href="https://codethink.no-ip.org/archives/673">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>A while back I posted a handful of simple <a href="http://codethink.no-ip.org/wordpress/archives/541">iOS utilities</a>.  Among them was a basic <em>ScreenCaptureView</em> implementation that would periodically render the contents of its subview(s) into a <em>UIImage</em> that was exposed as a publicly accessible property.  This provides the ability to quickly and easily take a snapshot of your running application, or any arbitrary component within it.  And while not superbly impressive (the iPhone has a built-in screenshot feature, after all), I noted that the control theoretically allowed for captured frames to be sent off to an <em>AVCaptureSession</em> in order to record live video of a running application.  </p>
<p>Recently I returned to this bit of code, and the ability to record live video of an application is theoretical no longer.  To get straight to the point, here is the revised code:</p>
<pre class="brush: cpp; title: ; notranslate">//
//ScreenCaptureView.h
//
#import &lt;UIKit/UIKit.h&gt;
#import &lt;AVFoundation/AVFoundation.h&gt;

/**
 * Delegate protocol.  Implement this if you want to receive a notification when the
 * view completes a recording.
 *
 * When a recording is completed, the ScreenCaptureView will notify the delegate, passing
 * it the path to the created recording file if the recording was successful, or a value
 * of nil if the recording failed/could not be saved.
 */
@protocol ScreenCaptureViewDelegate &lt;NSObject&gt;
- (void) recordingFinished:(NSString*)outputPathOrNil;
@end


/**
 * ScreenCaptureView, a UIView subclass that periodically samples its current display
 * and stores it as a UIImage available through the 'currentScreen' property.  The
 * sample/update rate can be configured (within reason) by setting the 'frameRate'
 * property.
 *
 * This class can also be used to record real-time video of its subviews, using the
 * 'startRecording' and 'stopRecording' methods.  A new recording will overwrite any
 * previously made recording file, so if you want to create multiple recordings per
 * session (or across multiple sessions) then it is your responsibility to copy/back-up
 * the recording output file after each session.
 *
 * To use this class, you must link against the following frameworks:
 *
 *  - AssetsLibrary
 *  - AVFoundation
 *  - CoreGraphics
 *  - CoreMedia
 *  - CoreVideo
 *  - QuartzCore
 *
 */

@interface ScreenCaptureView : UIView {
   //video writing
   AVAssetWriter *videoWriter;
   AVAssetWriterInput *videoWriterInput;
   AVAssetWriterInputPixelBufferAdaptor *avAdaptor;

   //recording state
   BOOL _recording;
   NSDate* startedAt;
   void* bitmapData;
}

//for recording video
- (bool) startRecording;
- (void) stopRecording;

//for accessing the current screen and adjusting the capture rate, etc.
@property(retain) UIImage* currentScreen;
@property(assign) float frameRate;
@property(nonatomic, assign) id&lt;ScreenCaptureViewDelegate&gt; delegate;

@end



//
//ScreenCaptureView.m
//
#import &quot;ScreenCaptureView.h&quot;
#import &lt;QuartzCore/QuartzCore.h&gt;
#import &lt;MobileCoreServices/UTCoreTypes.h&gt;
#import &lt;AssetsLibrary/AssetsLibrary.h&gt;

@interface ScreenCaptureView(Private)
- (void) writeVideoFrameAtTime:(CMTime)time;
@end


@implementation ScreenCaptureView

@synthesize currentScreen, frameRate, delegate;

- (void) initialize {
   // Initialization code
   self.clearsContextBeforeDrawing = YES;
   self.currentScreen = nil;
   self.frameRate = 10.0f;     //10 frames per second
   _recording = false;
   videoWriter = nil;
   videoWriterInput = nil;
   avAdaptor = nil;
   startedAt = nil;
   bitmapData = NULL;
}

- (id) initWithCoder:(NSCoder *)aDecoder {
   self = [super initWithCoder:aDecoder];
   if (self) {
       [self initialize];
   }
   return self;
}

- (id) init {
   self = [super init];
   if (self) {
       [self initialize];
   }
   return self;
}

- (id)initWithFrame:(CGRect)frame {
   self = [super initWithFrame:frame];
   if (self) {
       [self initialize];
   }
   return self;
}

- (CGContextRef) createBitmapContextOfSize:(CGSize) size {
   CGContextRef    context = NULL;
   CGColorSpaceRef colorSpace;
   int             bitmapByteCount;
   int             bitmapBytesPerRow;

   bitmapBytesPerRow   = (size.width * 4);
   bitmapByteCount     = (bitmapBytesPerRow * size.height);
   colorSpace = CGColorSpaceCreateDeviceRGB();
   if (bitmapData != NULL) {
       free(bitmapData);
   }
   bitmapData = malloc( bitmapByteCount );
   if (bitmapData == NULL) {
       fprintf (stderr, &quot;Memory not allocated!&quot;);
       return NULL;
   }

   context = CGBitmapContextCreate (bitmapData,
                                    size.width,
                                    size.height,
                                    8,      // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaNoneSkipFirst);

   if (context == NULL) {
       free(bitmapData);
       bitmapData = NULL;
       fprintf(stderr, &quot;Context not created!&quot;);
       return NULL;
   }
   CGContextSetAllowsAntialiasing(context, NO);
   CGColorSpaceRelease( colorSpace );

   return context;
}


//static int frameCount = 0;            //debugging
- (void) drawRect:(CGRect)rect {
   NSDate* start = [NSDate date];
   CGContextRef context = [self createBitmapContextOfSize:self.frame.size];
   if (context == NULL) {
       //could not create the bitmap context; try again on the next pass
       [self performSelector:@selector(setNeedsDisplay) withObject:nil afterDelay:0.5];
       return;
   }
   
   //Core Graphics uses a bottom-left origin, so flip the context vertically to match UIKit's top-left origin
   CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, self.frame.size.height);
   CGContextConcatCTM(context, flipVertical);

   [self.layer renderInContext:context];
   
   CGImageRef cgImage = CGBitmapContextCreateImage(context);
   UIImage* background = [UIImage imageWithCGImage: cgImage];
   CGImageRelease(cgImage);
   
   self.currentScreen = background;

   //debugging
   //if (frameCount &lt; 40) {
   //      NSString* filename = [NSString stringWithFormat:@&quot;Documents/frame_%d.png&quot;, frameCount];
   //      NSString* pngPath = [NSHomeDirectory() stringByAppendingPathComponent:filename];
   //      [UIImagePNGRepresentation(self.currentScreen) writeToFile: pngPath atomically: YES];
   //      frameCount++;
   //}

   //NOTE:  to record a scrollview while it is scrolling you need to implement your UIScrollViewDelegate such that it calls
   //       'setNeedsDisplay' on the ScreenCaptureView.
   if (_recording) {
       float millisElapsed = [[NSDate date] timeIntervalSinceDate:startedAt] * 1000.0;
       [self writeVideoFrameAtTime:CMTimeMake((int)millisElapsed, 1000)];
   }

   float processingSeconds = [[NSDate date] timeIntervalSinceDate:start];
   float delayRemaining = (1.0 / self.frameRate) - processingSeconds;

   CGContextRelease(context);

   //redraw at the specified framerate
   [self performSelector:@selector(setNeedsDisplay) withObject:nil afterDelay:delayRemaining &gt; 0.0 ? delayRemaining : 0.01];   
}

- (void) cleanupWriter {
   [avAdaptor release];
   avAdaptor = nil;
   
   [videoWriterInput release];
   videoWriterInput = nil;
   
   [videoWriter release];
   videoWriter = nil;
   
   [startedAt release];
   startedAt = nil;

   if (bitmapData != NULL) {
       free(bitmapData);
       bitmapData = NULL;
   }
}

- (void)dealloc {
   [self cleanupWriter];
   [super dealloc];
}

- (NSURL*) tempFileURL {
   NSString* outputPath = [[NSString alloc] initWithFormat:@&quot;%@/%@&quot;, [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0], @&quot;output.mp4&quot;];
   NSURL* outputURL = [[NSURL alloc] initFileURLWithPath:outputPath];
   NSFileManager* fileManager = [NSFileManager defaultManager];
   if ([fileManager fileExistsAtPath:outputPath]) {
       NSError* error = nil;
       if ([fileManager removeItemAtPath:outputPath error:&amp;error] == NO) {
           NSLog(@&quot;Could not delete old recording file at path:  %@&quot;, outputPath);
       }
   }

   [outputPath release];
   return [outputURL autorelease];
}

-(BOOL) setUpWriter {
   NSError* error = nil;
   videoWriter = [[AVAssetWriter alloc] initWithURL:[self tempFileURL] fileType:AVFileTypeQuickTimeMovie error:&amp;error];
   NSParameterAssert(videoWriter);

   //Configure video
   NSDictionary* videoCompressionProps = [NSDictionary dictionaryWithObjectsAndKeys:
                                          [NSNumber numberWithDouble:1024.0*1024.0], AVVideoAverageBitRateKey,
                                          nil ];

   NSDictionary* videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                  AVVideoCodecH264, AVVideoCodecKey,
                                  [NSNumber numberWithInt:self.frame.size.width], AVVideoWidthKey,
                                  [NSNumber numberWithInt:self.frame.size.height], AVVideoHeightKey,
                                  videoCompressionProps, AVVideoCompressionPropertiesKey,
                                  nil];

   videoWriterInput = [[AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings] retain];

   NSParameterAssert(videoWriterInput);
   videoWriterInput.expectsMediaDataInRealTime = YES;
   NSDictionary* bufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys: 
                                     [NSNumber numberWithInt:kCVPixelFormatType_32ARGB], kCVPixelBufferPixelFormatTypeKey, nil];

   avAdaptor = [[AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoWriterInput sourcePixelBufferAttributes:bufferAttributes] retain];

   //add input
   [videoWriter addInput:videoWriterInput];
   [videoWriter startWriting];
   [videoWriter startSessionAtSourceTime:CMTimeMake(0, 1000)];

   return YES;
}



- (void) completeRecordingSession {
   NSAutoreleasePool* pool = [[NSAutoreleasePool alloc] init];

   [videoWriterInput markAsFinished];
   
   // Wait for the video
   int status = videoWriter.status;
   while (status == AVAssetWriterStatusUnknown) {
       NSLog(@&quot;Waiting...&quot;);
       [NSThread sleepForTimeInterval:0.5f];
       status = videoWriter.status;
   }

   @synchronized(self) {
       BOOL success = [videoWriter finishWriting];
       if (!success) {
           NSLog(@&quot;finishWriting returned NO&quot;);
       }

       [self cleanupWriter];

       id delegateObj = self.delegate;
       NSString *outputPath = [[NSString alloc] initWithFormat:@&quot;%@/%@&quot;, [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0], @&quot;output.mp4&quot;];
       NSURL *outputURL = [[NSURL alloc] initFileURLWithPath:outputPath];

       NSLog(@&quot;Completed recording, file is stored at:  %@&quot;, outputURL);
       if ([delegateObj respondsToSelector:@selector(recordingFinished:)]) {
           [delegateObj performSelectorOnMainThread:@selector(recordingFinished:) withObject:(success ? outputPath : nil) waitUntilDone:YES];
       }

       [outputPath release];
       [outputURL release];
   }

   [pool drain];
}



- (bool) startRecording {
   bool result = NO;
   @synchronized(self) {
       if (! _recording) {
           result = [self setUpWriter];
           startedAt = [[NSDate date] retain];
           _recording = true;
       }
   }

   return result;
}

- (void) stopRecording {
   @synchronized(self) {
       if (_recording) {
           _recording = false;
           [self completeRecordingSession];
       }
   }
}

-(void) writeVideoFrameAtTime:(CMTime)time {
   if (![videoWriterInput isReadyForMoreMediaData]) {
       NSLog(@&quot;Not ready for video data&quot;);
   }
   else {
       @synchronized (self) {
           UIImage* newFrame = [self.currentScreen retain];
           CVPixelBufferRef pixelBuffer = NULL;
           CGImageRef cgImage = CGImageCreateCopy([newFrame CGImage]);
           CFDataRef image = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));

           int status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, avAdaptor.pixelBufferPool, &amp;pixelBuffer);
           if (status != 0) {
               //could not get a buffer from the pool; drop this frame
               NSLog(@&quot;Error creating pixel buffer:  status=%d&quot;, status);
               CFRelease(image);
               CGImageRelease(cgImage);
               [newFrame release];
               return;
           }

           // set image data into pixel buffer
           CVPixelBufferLockBaseAddress( pixelBuffer, 0 );
           uint8_t* destPixels = CVPixelBufferGetBaseAddress(pixelBuffer);
           CFDataGetBytes(image, CFRangeMake(0, CFDataGetLength(image)), destPixels);  //XXX:  assumes the pixel buffer is contiguous and has the same bytesPerRow as the input data

           BOOL success = [avAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:time];
           if (!success) {
               NSLog(@&quot;Warning:  Unable to write buffer to video&quot;);
           }

           //clean up
           [newFrame release];
           CVPixelBufferUnlockBaseAddress( pixelBuffer, 0 );
           CVPixelBufferRelease( pixelBuffer );        
           CFRelease(image);
           CGImageRelease(cgImage);
       }

   }

}

@end</pre>
<p>This class will let you record high-quality video of any other view in your application.  To use it, simply set it up as the superview of the <em>UIView(s)</em> that you want to record, add a reference to it in your corresponding <em>UIViewController</em> (using Interface Builder or whatever your preferred method happens to be), and then call &#8216;<em>startRecording</em>&#8216; when you are ready to start recording video.  When you&#8217;ve recorded enough, call &#8216;<em>stopRecording</em>&#8216; to complete the process.  You will get a nice .mp4 file stored under your application&#8217;s &#8216;Documents&#8217; directory that you can copy off or do whatever else you want with.</p>
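<p>As a minimal usage sketch (the &#8216;captureView&#8217; outlet name and the 30-second cutoff are purely illustrative):</p>
<pre class="brush: cpp; title: ; notranslate">//in a UIViewController that adopts ScreenCaptureViewDelegate
- (void) viewDidAppear:(BOOL)animated {
   [super viewDidAppear:animated];
   captureView.delegate = self;
   [captureView startRecording];
   //stop automatically after 30 seconds
   [captureView performSelector:@selector(stopRecording) withObject:nil afterDelay:30.0];
}

- (void) recordingFinished:(NSString*)outputPathOrNil {
   if (outputPathOrNil) {
       NSLog(@&quot;Recording saved to:  %@&quot;, outputPathOrNil);
   }
   else {
       NSLog(@&quot;Recording failed or could not be saved&quot;);
   }
}</pre>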
<p>Note that if you want to record a <em>UIScrollView</em> while it is scrolling, you will need to implement your <em>UIScrollViewDelegate</em> such that it calls &#8216;<em>setNeedsDisplay</em>&#8216; on the <em>ScreenCaptureView</em> while the scroll-view is scrolling.  For instance:</p>
<pre class="brush: cpp; title: ; notranslate">- (void) scrollViewDidScroll: (UIScrollView*)scrollView {
       [captureView setNeedsDisplay];
}</pre>
<p>I haven&#8217;t tested this code on a physical device yet, but there&#8217;s no reason why it should not work on any device that includes H.264 video codec support (iPhone 3GS and later).  However, given the amount of drawing that it does, it&#8217;s safe to say that the more horsepower behind it, the better.  </p>
<p>Here is a rather unimpressive 30-second recording of a <em>UITableView</em> that I created using this class (if your browser doesn&#8217;t support HTML5, use the link below):<br />
<video width="320" height="460" controls><source src="http://codethink.no-ip.org/wordpress/wp-content/uploads/2011/06/output.mp4"></video><br />
<a href='http://codethink.no-ip.org/wordpress/wp-content/uploads/2011/06/output.mp4' target="_blank">Example iPhone Recording</a></p>
<p>Lastly, I haven&#8217;t tested this class with any OpenGL-based subviews, so I can&#8217;t say if it will work in that case.  If you try it in this configuration, please feel free to reply with your results.</p>
<p><strong>Update</strong></p>
<p>For anyone looking for a working example, you can download this <a href="http://codethink.no-ip.org/ScreenCaptureViewTest.zip">sample project</a>.  This project simply creates a 30-second recording of a &#8216;<em>UITableView</em>&#8216;.</p>
]]></content:encoded>
			<wfw:commentRss>https://codethink.no-ip.org/archives/673/feed</wfw:commentRss>
		<slash:comments>156</slash:comments>
<enclosure url="http://codethink.no-ip.org/wordpress/wp-content/uploads/2011/06/output.mp4" length="700174" type="video/mp4" />
		</item>
		<item>
		<title>[Objective-C + Cocoa] Runtime Performance Profiling</title>
		<link>https://codethink.no-ip.org/archives/563</link>
		<comments>https://codethink.no-ip.org/archives/563#comments</comments>
		<pubDate>Fri, 01 Apr 2011 07:25:47 +0000</pubDate>
		<dc:creator><![CDATA[aroth]]></dc:creator>
				<category><![CDATA[coding]]></category>
		<category><![CDATA[objective-c]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[objc]]></category>

		<guid isPermaLink="false">http://codethink.no-ip.org/wordpress/?p=563</guid>
		<description><![CDATA[The iPhone SDK and XCode provide some very useful tools for application profiling, particularly with respect to tracking memory consumption and pinpointing memory leaks and other similar issues, but one thing which I have found lacking in the default toolset &#8230; <a href="https://codethink.no-ip.org/archives/563">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>The iPhone SDK and XCode provide some very useful tools for application profiling, particularly with respect to tracking memory consumption and pinpointing memory leaks and other similar issues, but one thing which I have found lacking in the default toolset is the ability to track the execution time of this-or-that specific bit of code, be it a function call, a loop within a function, or some arbitrary series of API calls made in quick succession.  </p>
<p>Typically I would deal with such cases by adding some one-off code around the section that I was interested in profiling to measure and log its execution time.  But this approach quickly becomes unwieldy when there are several different places that you are interested in profiling, and particularly when it comes time to track down and remove all of these bits of code so that they stop spamming the log with timing data that is no longer needed.  </p>
<p>In any case, here is my solution to this minor annoyance, patterned after a similar utility I implemented in ActionScript some time ago:</p>
<pre class="brush: cpp; title: ; notranslate">//PerformanceTimer.h
#import &lt;Foundation/Foundation.h&gt;


@interface PerformanceTimer : NSObject {
    NSMutableArray* startTimes;
    NSMutableArray* names;
}

+ (void) startTimerWithName: (NSString*) name;
+ (clock_t) stopLastTimer;
+ (clock_t) stopTimerWithName: (NSString*) name;

@end


//PerformanceTimer.m
#import &quot;PerformanceTimer.h&quot;

static PerformanceTimer* timerInstance = nil;

@implementation PerformanceTimer

- (id) init {
    self = [super init];
    if (self) {
        startTimes = [[NSMutableArray alloc] initWithCapacity:16];
        names = [[NSMutableArray alloc] initWithCapacity:16];
    }
    
    return self;
}

- (void) dealloc {
    [startTimes release];
    startTimes = nil;
    
    [names release];
    names = nil;
    
    [super dealloc];
}

- (void) startTimer: (NSString*) name {
    @synchronized(startTimes) {
        clock_t start = clock();
        NSNumber* startNum = [NSNumber numberWithUnsignedLong:start];
        [startTimes addObject:startNum];
        [names addObject:name];
    }
}

- (clock_t) stopTimerNamed: (NSString*) name {
    @synchronized(startTimes) {
        int index = 0;
        for (NSString* timerName in names) {
            if ([timerName isEqualToString:name]) {
                break;
            }
            index++;
        }
        if (index &gt;= [names count]) {
            //couldn't find it
            return 0;
        }
        clock_t start = [[startTimes objectAtIndex:index] unsignedLongValue];
        clock_t elapsedMillis = (clock() - start) / (CLOCKS_PER_SEC / 1000.0);   //note: clock() measures CPU time, not wall-clock time
        
        #ifdef DEBUG
        //if debugging, always print, otherwise let the caller decide what to do
        NSLog(@&quot;PerformanceTimer:  Total execution time for task named '%@':  %lu ms&quot;, name, elapsedMillis);
        #endif
        
        [startTimes removeObjectAtIndex:index];
        [names removeObjectAtIndex:index];
        
        return elapsedMillis;
    }
}

- (clock_t) stopTimer {
    @synchronized(startTimes) {
        if ([names count] &lt; 1) {
            //no timer is running
            return 0;
        }
        
        NSString* name = [names objectAtIndex:[names count] - 1];
        return [self stopTimerNamed:name];
    }
}

+ (void) startTimerWithName: (NSString*) name {
    if (! timerInstance) {
        timerInstance = [[PerformanceTimer alloc] init];
    }
    [timerInstance startTimer:name];
}

+ (clock_t) stopLastTimer {
    if (! timerInstance) {
        timerInstance = [[PerformanceTimer alloc] init];
    }
    return [timerInstance stopTimer];
}

+ (clock_t) stopTimerWithName: (NSString*) name {
    if (! timerInstance) {
        timerInstance = [[PerformanceTimer alloc] init];
    }
    return [timerInstance stopTimerNamed:name];
}

@end</pre>
<p>This adds a static utility class that can be used to spawn multiple independent named timers.  It is used like:</p>
<pre class="brush: cpp; title: ; notranslate">[PerformanceTimer startTimerWithName:@&quot;myTimerName&quot;];
//do some things that you want to time
clock_t elapsedTimeMillis = [PerformanceTimer stopTimerWithName: @&quot;myTimerName&quot;];
//print or log the elapsed time</pre>
<p>The default behavior of this class when compiled in debug mode (i.e. with <em>DEBUG</em> defined) is to print the elapsed time automatically when one of the &#8216;stop&#8217; methods is called, so it&#8217;s not generally necessary to capture or inspect the returned value unless you are running in release mode, or you want to do something more involved than simply logging the timing data to the console.</p>
<p>Note that although a &#8216;stopLastTimer&#8217; method is provided for convenience, its use is generally not recommended unless you are absolutely sure that nobody else has started a timer somewhere else in the code.  Otherwise you can end up inadvertently stopping the wrong timer.  Inside of a small, simple, single-threaded application there is little to worry about.  But for any more complex/multi-threaded projects it is much safer to always specify a timer by name.</p>
]]></content:encoded>
			<wfw:commentRss>https://codethink.no-ip.org/archives/563/feed</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>[Objective-C + Cocoa]  UIScrollView and contentSize</title>
		<link>https://codethink.no-ip.org/archives/357</link>
		<comments>https://codethink.no-ip.org/archives/357#comments</comments>
		<pubDate>Sat, 12 Feb 2011 12:31:17 +0000</pubDate>
		<dc:creator><![CDATA[aroth]]></dc:creator>
				<category><![CDATA[coding]]></category>
		<category><![CDATA[objective-c]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[objc]]></category>

		<guid isPermaLink="false">http://codethink.no-ip.org/wordpress/?p=357</guid>
		<description><![CDATA[So here&#8217;s a simple one. For whatever reason, a UIScrollView instance only behaves correctly if you programmatically set its contentSize when you use it. This is fairly silly because in most cases the contentSize is simply the total size of &#8230; <a href="https://codethink.no-ip.org/archives/357">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>So here&#8217;s a simple one.  For whatever reason, a UIScrollView instance only behaves correctly if you programmatically set its contentSize when you use it.  This is fairly silly because in most cases the contentSize is simply the total size of the UIScrollView&#8217;s subview(s).  Why the UIScrollView class doesn&#8217;t provide at least the option of automatically determining its own contentSize based upon its current subviews is beyond me, but here is some simple code to approximate this behavior:</p>
<pre class="brush: cpp; title: ; notranslate">@interface UIScrollView(auto_size)
- (void) adjustHeightForCurrentSubviews: (int) verticalPadding;
- (void) adjustWidthForCurrentSubviews: (int) horizontalPadding;
- (void) adjustWidth: (bool) changeWidth andHeight: (bool) changeHeight withHorizontalPadding: (int) horizontalPadding andVerticalPadding: (int) verticalPadding;
@end

@implementation UIScrollView(auto_size) 
- (void) adjustWidth: (bool) changeWidth andHeight: (bool) changeHeight withHorizontalPadding: (int) horizontalPadding andVerticalPadding: (int) verticalPadding {
    float contentWidth = horizontalPadding;
    float contentHeight = verticalPadding;
    for (UIView* subview in self.subviews) {
        [subview sizeToFit];
        contentWidth += subview.frame.size.width;
        contentHeight += subview.frame.size.height;
    }
    
    contentWidth = changeWidth ? contentWidth : self.superview.frame.size.width;
    contentHeight = changeHeight ? contentHeight : self.superview.frame.size.height;
    
    NSLog(@&quot;Adjusting ScrollView size to %fx%f, verticalPadding=%d, horizontalPadding=%d&quot;, contentWidth, contentHeight, verticalPadding, horizontalPadding);
    self.contentSize = CGSizeMake(contentWidth, contentHeight);
}

- (void) adjustHeightForCurrentSubviews: (int) verticalPadding {
    [self adjustWidth:NO andHeight:YES withHorizontalPadding:0 andVerticalPadding:verticalPadding];
}

- (void) adjustWidthForCurrentSubviews: (int) horizontalPadding {
    [self adjustWidth:YES andHeight:NO withHorizontalPadding:horizontalPadding andVerticalPadding:0];
}
@end</pre>
<p>This code allows a UIScrollView to internally determine its contentSize based upon its current subviews; all you have to do is call one of the three interface methods at an appropriate time (such as from within your parent view-controller&#8217;s &#8216;<em>viewDidLoad</em>&#8217; implementation).  Note that while auto-sizing on both width and height is supported, the computed width is only correct if all of the UIScrollView&#8217;s subviews span the entire height of the view, and the computed height is only correct if all of the subviews span the entire width of the view.  For instance, if you add a thumbnail image to the UIScrollView and then drag a UILabel next to it, both will count towards the computed height even though they are logically on the same row.  </p>
<p>You can work around this limitation either by using the &#8216;&#8230;padding&#8217; parameters to adjust the final contentSize, or by adding a UIView that spans the width of the UIScrollView and placing both your thumbnail image and UILabel as subviews of that UIView instead of the UIScrollView.  The latter option of using a nested UIView to contain each row&#8217;s content is a better/more maintainable way to build an interface anyways (and building a UI in Android basically requires this pattern, so best to get used to it).  I did try various approaches to solve this problem automatically in the code, such as keeping track of the min and max x/y coordinates of every subview in the UIScrollView, but this gave inconsistent results between the initial time the view was displayed and subsequent times.</p>
]]></content:encoded>
			<wfw:commentRss>https://codethink.no-ip.org/archives/357/feed</wfw:commentRss>
		<slash:comments>8</slash:comments>
		</item>
		<item>
		<title>[Cocoa + iPhone] UITableViewCell:  It&#8217;s Broken!</title>
		<link>https://codethink.no-ip.org/archives/367</link>
		<comments>https://codethink.no-ip.org/archives/367#comments</comments>
		<pubDate>Thu, 10 Feb 2011 13:27:59 +0000</pubDate>
		<dc:creator><![CDATA[aroth]]></dc:creator>
				<category><![CDATA[coding]]></category>
		<category><![CDATA[objective-c]]></category>
		<category><![CDATA[hack]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[objc]]></category>

		<guid isPermaLink="false">http://codethink.no-ip.org/wordpress/?p=367</guid>
		<description><![CDATA[I present for your consideration the following screenshot: It shows a basic table-view, in which each cell has been assigned the same image (using its built-in &#8216;imageView&#8216; property). The source image is 20 pixels square, and the imageView&#8217;s &#8216;contentMode&#8216; property &#8230; <a href="https://codethink.no-ip.org/archives/367">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>I present for your consideration the following screenshot:</p>
<p><a href="http://codethink.no-ip.org/wordpress/wp-content/uploads/2011/02/Screen-shot-2011-02-10-at-12.31.37-AM.png" rel="lightbox[367]"><img src="http://codethink.no-ip.org/wordpress/wp-content/uploads/2011/02/Screen-shot-2011-02-10-at-12.31.37-AM.png" alt="UITableViewCell is broken!" title="UITableViewCell is broken!" width="360" height="711" class="aligncenter size-full wp-image-371" /></a></p>
<p>It shows a basic table-view, in which each cell has been assigned the same image (using its built-in &#8216;<em>imageView</em>&#8216; property).  The source image is 20 pixels square, and the imageView&#8217;s &#8216;<em>contentMode</em>&#8216; property has not been changed (not that changing it makes any difference). The image for each row is also being scaled to 50% and rendered at the orientation stated in the cell text.  The code for the table controller is as follows:</p>
<pre class="brush: cpp; title: ; notranslate">#import &quot;UITableViewTestViewController.h&quot;

static NSString* rowNames[8] = {@&quot;UIImageOrientationUp&quot;, @&quot;UIImageOrientationDown&quot;, @&quot;UIImageOrientationLeft&quot;, @&quot;UIImageOrientationRight&quot;, 
                                @&quot;UIImageOrientationUpMirrored&quot;, @&quot;UIImageOrientationDownMirrored&quot;, @&quot;UIImageOrientationLeftMirrored&quot;, 
                                @&quot;UIImageOrientationRightMirrored&quot;};

#define IMAGE_NAME @&quot;testImage.png&quot;

@implementation UITableViewTestViewController

- (void)dealloc {
    [super dealloc];
}

- (void)didReceiveMemoryWarning {
    // Releases the view if it doesn't have a superview.
    [super didReceiveMemoryWarning];
}

#pragma mark - View lifecycle
- (NSInteger) tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
    return 8;  //number of elements in the enumeration
}

- (NSInteger) numberOfSectionsInTableView:(UITableView *)tableView {
    return 1;
}

- (UITableViewCell*) tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    static NSString* cellIdentifier = @&quot;TestCell&quot;;
    
    //return a basic cell with the icon in it and some text
    UITableViewCell* cell = [tableView dequeueReusableCellWithIdentifier:cellIdentifier];
    if (cell == nil) {
        //init cell
        cell = [[[UITableViewCell alloc] initWithFrame:CGRectZero reuseIdentifier:cellIdentifier] autorelease];
    }
    
    cell.accessoryType = UITableViewCellAccessoryNone;
    cell.textLabel.text = rowNames[indexPath.row];          //enum starts from 0, so indexPath.row matches the orientation that we are going to apply
    cell.textLabel.font = [cell.textLabel.font fontWithSize:12.0];
    cell.textLabel.textColor = [UIColor darkGrayColor];
    cell.imageView.image = [UIImage imageWithCGImage:[UIImage imageNamed:IMAGE_NAME].CGImage scale:0.5 orientation:(UIImageOrientation)indexPath.row];  //the scale operation will be ignored for UIImageOrientationUp, because something is broken
    
    return cell;
}

- (void) tableView:(UITableView *)tableView willDisplayCell:(UITableViewCell *)cell forRowAtIndexPath:(NSIndexPath *)indexPath {
    //it makes no difference if we set the image here
    //cell.imageView.image =  [UIImage imageWithCGImage:[UIImage imageNamed:IMAGE_NAME].CGImage scale:0.5 orientation:indexPath.row];
}
@end</pre>
<p>It&#8217;s not doing anything all that special, but as you can see in the screenshot the image in the first cell is rendered differently than all the others.  More specifically, it is being stretched to the full size of its container so that it just looks kind of sad, and no amount of programmatic scale operations will fix it.  </p>
<p>This is one of the most maddening aspects of working with table-cells and images.  If you want an image that is slightly smaller than its container in the table-cell, or that is centered away from the top/side, then the only consistent way to do so is to create a custom table-cell.  And while it is not difficult to create a custom table-cell that implements the desired behavior, it needlessly clutters the source-tree with code that replicates functionality that Apple is supposed to be providing out of the box.</p>
<p>The problem, as exposed by this example code, is that when an image is scaled using UIImageOrientationUp (which is what most developers would use, given that they generally store their images in the orientation they want them displayed at) the UITableViewCell completely ignores the scaling operation.  I can only speculate as to the reason for this odd behavior, because at the very least I would expect the output to be the same no matter what UIImageOrientation is used (i.e. I would think that scaling would either consistently not work or consistently work, but this is manifestly not the case).</p>
<p>In any case, this behavior is very clearly a bug, and a particularly inconvenient one at that.  But it does expose a potential workaround that generates less source-clutter than creating a custom table-cell implementation every time you want to have cell images that actually work.  Just store your images upside-down (or preprocess them so that they are upside-down prior to adding to the table) and then invert them back to the proper orientation when you scale them to the size you want for your table.  </p>
<p>It&#8217;s dodgy as all hell to do it that way, but still arguably better than reimplementing functionality that Apple is supposed to be providing out of the box.</p>
<p>Project source code is available here:  <a href="http://codethink.no-ip.org/UITableViewTest.zip">http://codethink.no-ip.org/UITableViewTest.zip</a></p>
]]></content:encoded>
			<wfw:commentRss>https://codethink.no-ip.org/archives/367/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>[Cocoa + iPhone] Unraveling Apple&#8217;s Pagecurl</title>
		<link>https://codethink.no-ip.org/archives/320</link>
		<comments>https://codethink.no-ip.org/archives/320#comments</comments>
		<pubDate>Wed, 09 Feb 2011 13:17:04 +0000</pubDate>
		<dc:creator><![CDATA[aroth]]></dc:creator>
				<category><![CDATA[coding]]></category>
		<category><![CDATA[objective-c]]></category>
		<category><![CDATA[hack]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[objc]]></category>

		<guid isPermaLink="false">http://codethink.no-ip.org/wordpress/?p=320</guid>
		<description><![CDATA[First off, I encourage anyone that&#8217;s unfamiliar of this topic to read through this short but very sweet blog post on the subject (and to take a quick look at his sample code). We&#8217;ll be picking up where Steven left &#8230; <a href="https://codethink.no-ip.org/archives/320">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>First off, I encourage anyone that&#8217;s unfamiliar of this topic to read through this <a href="http://blog.steventroughtonsmith.com/2010/02/apples-ibooks-dynamic-page-curl.html" target="_blank">short but very sweet blog post</a> on the subject (and to take a quick look at his sample code).  We&#8217;ll be picking up where Steven left off.</p>
<p>In any case, to summarize the current situation: there exists a private and undocumented API in the iPhone SDK which Apple uses to great effect in their iBooks application.  The way to interface with this private API has been discovered and even <a href="http://www.iphonedevwiki.net/index.php?title=CAFilter" target="_blank">fairly well documented</a>.  Using the private API is pretty straightforward but for one small problem:  if you use the private API in your application then Apple will reject your app.  For whatever unspecified reason (probably to keep potential iBooks competitors in check), Apple does not want to open up their private API to developers or to play nice with developers who bend the rules and use it.  </p>
<p>So our goal is clear.  If Apple isn&#8217;t going to play nice and open the API up to developers, then perhaps we can do some digging to figure out how Apple&#8217;s implementation actually works and create our own implementation that does the same thing.  It&#8217;s a pretty standard exercise in reverse-engineering, really.  The core of the private page-curl API is used like so:</p>
<pre class="brush: cpp; title: ; notranslate">		filter = [[CAFilter filterWithType:kCAFilterPageCurl] retain];
		[filter setDefaults];
		[filter setValue:[NSNumber numberWithFloat:((NSUInteger)fingerDelta)/100.0] forKey:@&quot;inputTime&quot;];
		
		CGFloat _angleRad = angleBetweenCGPoints(currentPos, lastPos);
		CGFloat _angle = _angleRad*180/M_PI ; // I'm far more comfortable with using degrees ;-)
					
		if (_angle &lt; 180 &amp;&amp; _angle &gt; 120) {// here I've limited the results to the right-hand side of the paper. I'm sure there's a better way to do this
			if (fingerVector.y &gt; 0)
				[filter setValue:[NSNumber numberWithFloat:_angleRad] forKey:@&quot;inputAngle&quot;];
			else
				[filter setValue:[NSNumber numberWithFloat:-_angleRad] forKey:@&quot;inputAngle&quot;];

			_internalView.layer.filters = [NSArray arrayWithObject:filter];
		}</pre>
<p>This is an excerpt straight out of Steven Troughton-Smith&#8217;s example.  The example includes additional code related to tracking touch positions and interpolating the angle and distance between them, but this is really the core of the private API right here.  All of the heavy-lifting is handled by the CAFilter class (private), which has a type of &#8216;<em>kCAFilterPageCurl</em>&#8216; (private constant, just the string <em>@&#8221;pageCurl&#8221;</em>, other filter types also exist), and which takes just a small number of input parameters (&#8216;<em>inputTime</em>&#8216; and &#8216;<em>inputAngle</em>&#8216;) and then works its magic behind the scenes.</p>
<p>So given that CAFilter seems to be doing pretty much all the work, it would follow that by constructing our own class that exposes the same interface as CAFilter we can supplant the private-API class with one of our own making (ah, the joys of reflection and weak-typing), thus interfacing with the underlying platform without breaking any of the rules.  But what exactly is a CAFilter?  Is it as onerous as a UIView with its hundreds of methods and properties?  Does it extend another obscure private-API class that will also need to be reverse-engineered?  Well thanks to the &#8216;<em><a href="http://codethink.no-ip.org/wordpress/archives/236">printObject:toDepth:</a></em>&#8216; routine discussed in a previous post we can see that a CAFilter is exactly:</p>
<pre class="brush: cpp; title: ; notranslate">@interface CAFilter : NSObject {
	unsigned int _type;
	NSString* _name;
	unsigned int _flags;
	void* _attr;
	void* _cache;
}

//Constructors
- (id) initWithType:  (NSString*) arg0;
- (id) initWithName:  (NSString*) arg0;

//NSCoding
- (NSObject*) initWithCoder:  (NSCoder*) arg0;
- (void) encodeWithCoder:  (NSCoder*) arg0;

//NSKeyValueCoding
- (void) setValue: (id) arg0 forKey: (NSString*) arg1;
- (id) valueForKey:  (NSString*) arg0;

//NSCopying and NSMutableCopying
- (NSObject*) mutableCopyWithZone:  (NSZone*) arg0;
- (NSObject*) copyWithZone:  (NSZone*) arg0;

//interface methods
- (void) setDefaults;
- (bool) isEnabled;
- (struct UnknownAtomic*) CA_copyRenderValue;

//garbage collection (doesn't need to be declared here)
- (void) dealloc;

//property accessors (don't need to be declared here)
- (bool) enabled;
- (void) setEnabled:  (bool) arg0;
- (bool) cachesInputImage;
- (void) setCachesInputImage:  (bool) arg0;
- (NSString*) name;
- (void) setName:  (NSString*) arg0;
- (NSObject*) type;

//properties
@property(nonatomic, readonly) NSString* type;
@property(nonatomic, retain) NSString* name;
@property(nonatomic) bool enabled;
@property(nonatomic) bool cachesInputImage;

@end</pre>
<p>Nineteen methods and a handful of fields.  Not bad, not bad at all, particularly when many of the methods are simply implementing various publicly-documented protocols such as NSCoding, NSCopying, and NSKeyValueCoding.  As an added bonus, the superclass of CAFilter is NSObject, so the problem has now been reduced to the implementation of a single unknown class (which may still be a Herculean task, but at least now there are clearly-defined boundaries).  </p>
<p>But the above code includes some methods that do not need to be part of the publicly declared interface.  Let&#8217;s clean it up, rename it so that it doesn&#8217;t conflict with the existing private-API class, and add the proper definition of the &#8216;<em>&#8230;Atomic</em>&#8216; struct:</p>
<pre class="brush: cpp; title: ; notranslate">#import &lt;Foundation/Foundation.h&gt;

struct RenderValueResult { 
	int (**x1)(); 
	struct MyAtomic { 
		struct { 
			NSInteger x; 
		} _v; 
	} x2; 
} *_filterResult;

@interface MyCAFilter : NSObject&lt;NSCoding, NSCopying, NSMutableCopying&gt; {
	unsigned int _type;
	NSString* _name;
	unsigned int _flags;
	void* _attr;
	void* _cache;
}

//Constructors
- (id) initWithType:  (NSString*) arg0;
- (id) initWithName:  (NSString*) arg0;

//NSKeyValueCoding
- (void) setValue: (id) arg0 forKey: (NSString*) arg1;
- (id) valueForKey:  (NSString*) arg0;

//interface methods
- (void) setDefaults;
- (bool) isEnabled;
- (struct RenderValueResult*) CA_copyRenderValue;

//properties
@property(nonatomic, readonly) NSString* type;
@property(nonatomic, retain) NSString* name;
@property(nonatomic) bool enabled;
@property(nonatomic) bool cachesInputImage;

@end</pre>
<p>Looking better already.  That &#8216;<em>RenderValueResult</em>&#8216; struct will prove to be a nasty one, but more on that later.</p>
<p>Now that we know the interface, and before we go flying off randomly trying to replicate functionality that we still don&#8217;t fully understand, let&#8217;s take a simpler step.  Let&#8217;s create a simple class that exposes the CAFilter interface, wraps an actual CAFilter instance, and logs each method call, parameters, and result, like so:</p>
<pre class="brush: cpp; title: ; notranslate">//MyCAFilter.h (modified to include 'delegate' field)
#import &lt;Foundation/Foundation.h&gt;

struct RenderValueResult {
    int (**x1)();
    struct MyAtomic {
        struct {
            NSInteger x;
        } _v;
    } x2;
} *_renderValueResult;

@class CAFilter;  //private-API

@interface MyCAFilter : NSObject&lt;NSCoding, NSCopying, NSMutableCopying&gt; {
    unsigned int _type;
    NSString* _name;
    unsigned int _flags;
    void* _attr;
    void* _cache;
    
    CAFilter* delegate;  //private-API
}

//Constructors
- (id) initWithType:  (NSString*) arg0;
- (id) initWithName:  (NSString*) arg0;

//NSKeyValueCoding
- (void) setValue: (id) arg0 forKey: (NSString*) arg1;
- (id) valueForKey:  (NSString*) arg0;

//interface methods
- (void) setDefaults;
- (bool) isEnabled;
- (struct RenderValueResult*) CA_copyRenderValue;

//properties
@property(nonatomic, readonly) NSString* type;
@property(nonatomic, retain) NSString* name;
@property(nonatomic) bool enabled;
@property(nonatomic) bool cachesInputImage;

@end

//MyCAFilter.m
#import &quot;MyCAFilter.h&quot;

@implementation MyCAFilter

@dynamic name, cachesInputImage, type, enabled;

- (id) initWithType: (NSString*) theType {
	NSLog(@&quot;initWithType: type='%@'&quot;, theType);
    if ((self = [super init])) {
        delegate = [[CAFilter alloc] initWithType: theType];    //TODO:  remove delegate
    }
	return self;
}

- (id) initWithName: (NSString*) theName {
	NSLog(@&quot;initWithName: name='%@'&quot;, theName);
    if ((self = [super init])) {
        delegate = [[CAFilter alloc] initWithName: theName];    //TODO:  remove delegate
    }
	return self;
}

- (id) initWithCoder: (NSCoder*) coder {
	NSLog(@&quot;initWithCoder: coder=%@&quot;, coder);
	if ((self = [super init])) {
        delegate = [[CAFilter alloc] initWithCoder: coder];     //TODO:  remove delegate
    }
    return self;
}

- (void) setDefaults {
	NSLog(@&quot;setDefaults&quot;);
	[delegate setDefaults];  //TODO:  remove delegate
}

- (void) encodeWithCoder: (NSCoder*) encoder {
	NSLog(@&quot;encodeWithCoder:  coder=%@&quot;, encoder);
	[delegate encodeWithCoder:encoder];  //TODO:  remove delegate
}

- (id) mutableCopyWithZone: (NSZone*) zone {
    id result = [delegate mutableCopyWithZone:zone];   //TODO:  remove delegate
	NSLog(@&quot;mutableCopyWithZone: zone=%@; result=%@&quot;, zone, result);
	return result;
}

- (id) copyWithZone: (NSZone*) zone {
    id result = [delegate copyWithZone:zone];  //TODO:  remove delegate
	NSLog(@&quot;copyWithZone:  zone=%@; result=%@&quot;, zone, result);
	return result;
}

- (void) setValue: (id) value forKey: (NSString*) key {
	NSLog(@&quot;setValue:  key=%@, value=%@&quot;, key, value);
	[delegate setValue:value forKey:key];	//TODO:  remove delegate
}

- (id) valueForKey: (NSString*) key {
    id result = [delegate valueForKey:key];  //TODO:  remove delegate
	NSLog(@&quot;valueForKey:  key=%@; result=%@&quot;, key, result);
	return result;
}

- (bool) isEnabled {
    bool result = [delegate isEnabled]; //TODO:  remove delegate
	NSLog(@&quot;isEnabled; result=%d&quot;, result);
	return result; 
}

- (void) dealloc {
	NSLog(@&quot;dealloc&quot;);
	[delegate release];		//TODO:  remove delegate
	[super dealloc];
}

- (bool) enabled {
    bool result = [delegate enabled];		//TODO:  remove delegate
	NSLog(@&quot;enabled; result=%d&quot;, result);
	return result;
}
- (void) setEnabled: (bool) val {
	NSLog(@&quot;setEnabled: value=%d&quot;, val);
	[delegate setEnabled:val];		//TODO:  remove delegate
}

- (void) setCachesInputImage: (bool) val {
	NSLog(@&quot;setCachesInputImage: val=%d&quot;, val);
	[delegate setCachesInputImage:val];		//TODO:  remove delegate
}
- (bool) cachesInputImage {
    bool result = [delegate cachesInputImage];		//TODO:  remove delegate
	NSLog(@&quot;cachesInputImage; result=%d&quot;, result);
	return result;
}

- (id) name {
    id result = [delegate name];		//TODO:  remove delegate
	NSLog(@&quot;name; result=%@&quot;, result);
	return result;
}

- (void) setName: (NSString*) name {
	NSLog(@&quot;setName: name='%@'&quot;, name);
	[delegate setName: name];		//TODO:  remove delegate
}

- (NSString*) type {
    NSString* result = [delegate type];		//TODO:  remove delegate
	NSLog(@&quot;type; result=%@&quot;, result);
	return result;
}

- (struct RenderValueResult*) CA_copyRenderValue {
	struct RenderValueResult* result = [delegate CA_copyRenderValue];	//TODO:  remove delegate
    NSLog(@&quot;CA_copyRenderValue; result=0x%08X, result.x1=0x%08X, result.x2=%d&quot;, result, result-&gt;x1, result-&gt;x2._v.x);
	return result;
}

@end</pre>
<p>Using this class is a simple matter of editing &#8216;<em>ReadPdfView.m</em>&#8216; (working with Steven&#8217;s example project) to replace both instances of &#8216;<em>[[CAFilter filterWithType:kCAFilterPageCurl] retain];</em>&#8216; with &#8216;<em>[[MyCAFilter alloc] initWithType: @&#8221;pageCurl&#8221;];</em>&#8216;.  Note that it is also now safe to remove the &#8216;<em>@class CAFilter;</em>&#8216; and &#8216;<em>extern NSString *kCAFilterPageCurl;</em>&#8216; lines from this class.</p>
<p>Now obviously this still won&#8217;t fly with Apple, as it continues to use the private-API CAFilter class.  But consider what we&#8217;ve accomplished; we&#8217;ve now inserted our own custom object into the rendering pipeline, and the core-animation framework is none-the-wiser.  If we can now figure out how to get the same results without internally using the CAFilter instance, we will have cracked the page-curl animation.</p>
<p>Moving along, if we run this code through a complete page-curl animation, we see a very simple pattern emerge:</p>
<pre class="brush: plain; title: ; notranslate">2011-02-09 00:59:11.694 PageCurlDemo[5501:207] initWithType: type='pageCurl'
2011-02-09 00:59:11.707 PageCurlDemo[5501:207] setDefaults
2011-02-09 00:59:11.711 PageCurlDemo[5501:207] setValue:  key=inputTime, value=0
2011-02-09 00:59:11.716 PageCurlDemo[5501:207] setValue:  key=inputAngle, value=-3.141593
2011-02-09 00:59:11.734 PageCurlDemo[5501:207] CA_copyRenderValue; result=0x04AF09E0, result.x1=0x00D96448, result.x2=65538
2011-02-09 00:59:11.767 PageCurlDemo[5501:207] valueForKey:  key=inputTime; result=0
2011-02-09 00:59:11.808 PageCurlDemo[5501:207] dealloc</pre>
<p>This sequence of calls is repeated a number of times as the animation runs.  None of the other methods that exist on the object are called.  Every single one of these calls with the exception of &#8216;<em>CA_copyRenderValue</em>&#8216; originates in the example code; so now our task is constrained to the implementation of a single unknown method.  But what a method it is.  &#8216;<em>CA_copyRenderValue</em>&#8216; returns an instance of a fairly obtuse structure that has the following definition:</p>
<pre class="brush: cpp; title: ; notranslate">struct RenderValueResult { 
	int (**x1)(); 
	struct MyAtomic { 
		struct { 
			NSInteger x; 
		} _v; 
	} x2; 
} *_renderValueResult;</pre>
<p>I&#8217;ve changed the name of the structure and its nested structure to avoid any issues with name collisions, but since the order and type of fields matches the private-API version there should be no issues in terms of compatibility between the different declared versions.  At runtime this structure should be indistinguishable from the private-API version for all practical purposes (barring reflection, which could detect the difference in the naming).</p>
<p>Anyways, this structure contains two fields; &#8216;<em>x1</em>&#8216;, which is a pointer to an array of functions that return integers, and &#8216;<em>x2</em>&#8216;, which is simply an integer.  Interestingly enough, the memory address of the returned data structure never differs by more than 256 bytes between calls, nor do the absolute values of &#8216;<em>x1</em>&#8216; or &#8216;<em>x2</em>&#8216; change.  And here is where things start to get a bit murky.  I&#8217;m going to forget about &#8216;<em>x2</em>&#8216; for a moment, as it is a simple type and its value never seems to vary.  &#8216;<em>x1</em>&#8216; is not so easy.</p>
<p>By inspecting the value of &#8216;<em>x1</em>&#8216;, I&#8217;ve determined that it references no more than 11 distinct functions (the 12th element in the result returned by CAFilter is <em>NULL</em>, and I assume that the <em>NULL</em> indicates the probable end of the meaningful data in the array).  Moreover, the addresses of the functions returned do not appear to vary, even between independent runs of the application.  Which implies to me that perhaps the result being returned is simply referencing some pre-existing object in memory.  </p>
<p>But this is all speculation on my part.  What&#8217;s needed here is more digging, so let&#8217;s create our own callback functions and see what we can discover about the way this data structure is being used by the core-animation framework.  We can do that by adding the following to the MyCAFilter implementation:</p>
<pre class="brush: cpp; title: ; notranslate">int (**originalFuncs)();  //cache for the actual function pointers

//copy/paste this 11 times, incrementing both '0's each time...it's inelegant but it works
int callback0( id firstParam, ... ) {
	int myIndex = 0;
	NSLog(@&quot;callback%d invoked, stack=%@&quot;, myIndex, [NSThread callStackSymbols]);
	
	va_list args;
	va_start(args, firstParam);
	int originalResult = originalFuncs[myIndex](firstParam, args);  //pass any params we received on to the original function; not sure if this is the correct way to do this
	
	NSLog(@&quot;callback%d will return result:  %d&quot;, myIndex, originalResult);
	
	return originalResult;
}</pre>
<p>And then by revising &#8216;<em>CA_copyRenderValue</em>&#8216; like so:</p>
<pre class="brush: cpp; title: ; notranslate">void* myCallbacks[11] = {&amp;callback0, &amp;callback1, &amp;callback2, &amp;callback3, &amp;callback4, &amp;callback5, &amp;callback6, &amp;callback7, &amp;callback8, &amp;callback9, &amp;callback10};

- (struct RenderValueResult*) CA_copyRenderValue {
	struct RenderValueResult* result = [delegate CA_copyRenderValue];	//TODO:  remove delegate
	struct RenderValueResult* myResult = malloc(sizeof(struct RenderValueResult));
	myResult-&gt;x2 = result-&gt;x2;  //just copy the integer component of the result; 65538?
	
	//see how many functions there are before we encounter a NULL
	int funcIndex = 0;
	while (result-&gt;x1[funcIndex] != NULL) {
		funcIndex++;
		if (funcIndex &gt;= 11) {
			NSLog(@&quot;CA_copyRenderValue;  NULL sigil not found, assuming max number of functions is 11!&quot;);
			break;
		}
	}
	NSLog(@&quot;CA_copyRenderValue;  found %d functions in delegate's result...&quot;, funcIndex);
	
	myResult-&gt;x1 = malloc(sizeof(int*) * (funcIndex + 1));		//we return this to the CA framework
	originalFuncs = malloc(sizeof(int*) * (funcIndex));			//we keep references to the original functions to use in our callbacks
	for (int index = 0; index &lt; funcIndex ; index++) {
		originalFuncs[index] = result-&gt;x1[index];		//cache the original function pointers
		myResult-&gt;x1[index] = myCallbacks[index];     //put dummy callbacks into the result
	}
	myResult-&gt;x1[funcIndex] = NULL;
	
    NSLog(@&quot;CA_copyRenderValue; result=0x%08X, result.x1=0x%08X, result.x2=%d&quot;, result, result-&gt;x1, result-&gt;x2);
	for (int index = 0; index &lt; funcIndex; index++) {
		NSLog(@&quot;CA_copyRenderValue; result-&gt;x1[%d]=0x%08X&quot;, index, result-&gt;x1[index]);
	}
	return myResult;
}</pre>
<p>Now if we run the application, we get the following output:</p>
<pre class="brush: plain; title: ; notranslate">2011-02-09 23:50:25.508 PageCurlDemo[10453:207] callback3 invoked, stack=(
	0   PageCurlDemo                        0x00006c2b callback3 + 50
	1   QuartzCore                          0x00d63347 CACopyRenderArray + 188
	2   QuartzCore                          0x00cc373e -[CALayer(CALayerPrivate) _copyRenderLayer:layerFlags:commitFlags:] + 1667
	3   QuartzCore                          0x00cc30b4 CALayerCopyRenderLayer + 55
	4   QuartzCore                          0x00cc11d2 _ZN2CA7Context12commit_layerEP8_CALayerjjPv + 122
	5   QuartzCore                          0x00cc10e1 CALayerCommitIfNeeded + 323
	6   QuartzCore                          0x00cc1069 CALayerCommitIfNeeded + 203
	7   QuartzCore                          0x00cc1069 CALayerCommitIfNeeded + 203
	8   QuartzCore                          0x00caf7b9 _ZN2CA7Context18commit_transactionEPNS_11TransactionE + 1395
	9   QuartzCore                          0x00caf0d0 _ZN2CA11Transaction6commitEv + 292
	10  QuartzCore                          0x00cdf7d5 _ZN2CA11Transaction17observer_callbackEP19__CFRunLoopObservermPv + 99
	11  CoreFoundation                      0x00ef8fbb __CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION__ + 27
	12  CoreFoundation                      0x00e8e0e7 __CFRunLoopDoObservers + 295
	13  CoreFoundation                      0x00e56bd7 __CFRunLoopRun + 1575
	14  CoreFoundation                      0x00e56240 CFRunLoopRunSpecific + 208
	15  CoreFoundation                      0x00e56161 CFRunLoopRunInMode + 97
	16  GraphicsServices                    0x0184c268 GSEventRunModal + 217
	17  GraphicsServices                    0x0184c32d GSEventRun + 115
	18  UIKit                               0x002d242e UIApplicationMain + 1160
	19  PageCurlDemo                        0x00002904 main + 102
	20  PageCurlDemo                        0x00002895 start + 53
)
2011-02-09 23:50:25.514 PageCurlDemo[10453:207] callback3 will return result:  9
2011-02-09 23:50:25.519 PageCurlDemo[10453:207] callback3 invoked, stack=(
	0   PageCurlDemo                        0x00006c2b callback3 + 50
	1   QuartzCore                          0x00ce5d12 _ZN2CA6Render7Encoder13encode_objectEPKNS0_6ObjectE + 30
	2   QuartzCore                          0x00ce670d _ZNK2CA6Render5Array6encodeEPNS0_7EncoderE + 113
	3   QuartzCore                          0x00ce5f24 _ZNK2CA6Render5Layer6encodeEPNS0_7EncoderE + 458
	4   QuartzCore                          0x00ce5cdb _ZN2CA6Render17encode_set_objectEPNS0_7EncoderEmjPNS0_6ObjectEj + 91
	5   QuartzCore                          0x00cc1215 _ZN2CA7Context12commit_layerEP8_CALayerjjPv + 189
	6   QuartzCore                          0x00cc10e1 CALayerCommitIfNeeded + 323
	7   QuartzCore                          0x00cc1069 CALayerCommitIfNeeded + 203
	8   QuartzCore                          0x00cc1069 CALayerCommitIfNeeded + 203
	9   QuartzCore                          0x00caf7b9 _ZN2CA7Context18commit_transactionEPNS_11TransactionE + 1395
	10  QuartzCore                          0x00caf0d0 _ZN2CA11Transaction6commitEv + 292
	11  QuartzCore                          0x00cdf7d5 _ZN2CA11Transaction17observer_callbackEP19__CFRunLoopObservermPv + 99
	12  CoreFoundation                      0x00ef8fbb __CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION__ + 27
	13  CoreFoundation                      0x00e8e0e7 __CFRunLoopDoObservers + 295
	14  CoreFoundation                      0x00e56bd7 __CFRunLoopRun + 1575
	15  CoreFoundation                      0x00e56240 CFRunLoopRunSpecific + 208
	16  CoreFoundation                      0x00e56161 CFRunLoopRunInMode + 97
	17  GraphicsServices                    0x0184c268 GSEventRunModal + 217
	18  GraphicsServices                    0x0184c32d GSEventRun + 115
	19  UIKit                               0x002d242e UIApplicationMain + 1160
	20  PageCurlDemo                        0x00002904 main + 102
	21  PageCurlDemo                        0x00002895 start + 53
)
2011-02-09 23:50:25.527 PageCurlDemo[10453:207] callback3 will return result:  9
2011-02-09 23:50:25.536 PageCurlDemo[10453:207] callback3 invoked, stack=(
	0   PageCurlDemo                        0x00006c2b callback3 + 50
	1   QuartzCore                          0x00ce5d34 _ZN2CA6Render7Encoder13encode_objectEPKNS0_6ObjectE + 64
	2   QuartzCore                          0x00ce670d _ZNK2CA6Render5Array6encodeEPNS0_7EncoderE + 113
	3   QuartzCore                          0x00ce5f24 _ZNK2CA6Render5Layer6encodeEPNS0_7EncoderE + 458
	4   QuartzCore                          0x00ce5cdb _ZN2CA6Render17encode_set_objectEPNS0_7EncoderEmjPNS0_6ObjectEj + 91
	5   QuartzCore                          0x00cc1215 _ZN2CA7Context12commit_layerEP8_CALayerjjPv + 189
	6   QuartzCore                          0x00cc10e1 CALayerCommitIfNeeded + 323
	7   QuartzCore                          0x00cc1069 CALayerCommitIfNeeded + 203
	8   QuartzCore                          0x00cc1069 CALayerCommitIfNeeded + 203
	9   QuartzCore                          0x00caf7b9 _ZN2CA7Context18commit_transactionEPNS_11TransactionE + 1395
	10  QuartzCore                          0x00caf0d0 _ZN2CA11Transaction6commitEv + 292
	11  QuartzCore                          0x00cdf7d5 _ZN2CA11Transaction17observer_callbackEP19__CFRunLoopObservermPv + 99
	12  CoreFoundation                      0x00ef8fbb __CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION__ + 27
	13  CoreFoundation                      0x00e8e0e7 __CFRunLoopDoObservers + 295
	14  CoreFoundation                      0x00e56bd7 __CFRunLoopRun + 1575
	15  CoreFoundation                      0x00e56240 CFRunLoopRunSpecific + 208
	16  CoreFoundation                      0x00e56161 CFRunLoopRunInMode + 97
	17  GraphicsServices                    0x0184c268 GSEventRunModal + 217
	18  GraphicsServices                    0x0184c32d GSEventRun + 115
	19  UIKit                               0x002d242e UIApplicationMain + 1160
	20  PageCurlDemo                        0x00002904 main + 102
	21  PageCurlDemo                        0x00002895 start + 53
)
2011-02-09 23:50:25.566 PageCurlDemo[10453:207] callback3 will return result:  9
2011-02-09 23:50:25.578 PageCurlDemo[10453:207] callback4 invoked, stack=(
	0   PageCurlDemo                        0x00006cca callback4 + 50
	1   QuartzCore                          0x00ce670d _ZNK2CA6Render5Array6encodeEPNS0_7EncoderE + 113
	2   QuartzCore                          0x00ce5f24 _ZNK2CA6Render5Layer6encodeEPNS0_7EncoderE + 458
	3   QuartzCore                          0x00ce5cdb _ZN2CA6Render17encode_set_objectEPNS0_7EncoderEmjPNS0_6ObjectEj + 91
	4   QuartzCore                          0x00cc1215 _ZN2CA7Context12commit_layerEP8_CALayerjjPv + 189
	5   QuartzCore                          0x00cc10e1 CALayerCommitIfNeeded + 323
	6   QuartzCore                          0x00cc1069 CALayerCommitIfNeeded + 203
	7   QuartzCore                          0x00cc1069 CALayerCommitIfNeeded + 203
	8   QuartzCore                          0x00caf7b9 _ZN2CA7Context18commit_transactionEPNS_11TransactionE + 1395
	9   QuartzCore                          0x00caf0d0 _ZN2CA11Transaction6commitEv + 292
	10  QuartzCore                          0x00cdf7d5 _ZN2CA11Transaction17observer_callbackEP19__CFRunLoopObservermPv + 99
	11  CoreFoundation                      0x00ef8fbb __CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION__ + 27
	12  CoreFoundation                      0x00e8e0e7 __CFRunLoopDoObservers + 295
	13  CoreFoundation                      0x00e56bd7 __CFRunLoopRun + 1575
	14  CoreFoundation                      0x00e56240 CFRunLoopRunSpecific + 208
	15  CoreFoundation                      0x00e56161 CFRunLoopRunInMode + 97
	16  GraphicsServices                    0x0184c268 GSEventRunModal + 217
	17  GraphicsServices                    0x0184c32d GSEventRun + 115
	18  UIKit                               0x002d242e UIApplicationMain + 1160
	19  PageCurlDemo                        0x00002904 main + 102
	20  PageCurlDemo                        0x00002895 start + 53
)</pre>
<p>Followed by a crash.  Something causes the attempt to invoke the fourth callback function to die, probably related to the questionable way that I&#8217;m passing arguments to it.  Not knowing the proper signature for the callback functions, I&#8217;ve made them all accept a variable number of &#8216;id&#8217; parameters, which should cover most cases.  However, C provides no portable way for a variadic function to forward its arguments to another variadic function; that requires the callee to offer a <em>va_list</em>-accepting variant (as <em>vprintf</em> is to <em>printf</em>), and these private callbacks offer no such thing.  Passing the <em>va_list</em> itself, as the code above does, just pushes a single garbage pointer-sized value onto the call.  </p>
<p>For what it&#8217;s worth, I tried a number of alternate ways to invoke this function, all of which resulted in a crash.  Skipping the invocation and just returning a hard-coded value from my callback prevented the crash, but didn&#8217;t result in any more callbacks being invoked.  Presumably core-animation noticed that my hard-coded return value didn&#8217;t match what it was expecting, and decided to abort the rest of its rendering transaction.  </p>
<p>And unfortunately, here is where I need to leave this interesting little diversion for now, unless/until I can figure out a way to move it forward.  If you have any suggestions please don&#8217;t hesitate to let me know.  I feel like I&#8217;m getting close to the answer here, but it&#8217;s still quite a ways away.</p>
<p><strong>Update</strong></p>
<p>If anyone is interested, you can download a complete <a href="http://codethink.no-ip.org/PageCurlDemo_modified.zip">XCode project</a> containing the latest revision of my code.  If you decide to take a crack at solving this problem, I wish you luck, and please do consider reporting back with your results.</p>
]]></content:encoded>
			<wfw:commentRss>https://codethink.no-ip.org/archives/320/feed</wfw:commentRss>
		<slash:comments>10</slash:comments>
		</item>
		<item>
		<title>Android vs. iPhone; A Developer&#8217;s Comparison</title>
		<link>https://codethink.no-ip.org/archives/188</link>
		<comments>https://codethink.no-ip.org/archives/188#comments</comments>
		<pubDate>Mon, 31 Jan 2011 09:40:48 +0000</pubDate>
		<dc:creator><![CDATA[aroth]]></dc:creator>
				<category><![CDATA[banter]]></category>
		<category><![CDATA[coding]]></category>
		<category><![CDATA[android]]></category>
		<category><![CDATA[iphone]]></category>

		<guid isPermaLink="false">http://codethink.no-ip.org/wordpress/?p=188</guid>
		<description><![CDATA[So I&#8217;ve had a bit of exposure to both the iPhone and Android SDK&#8217;s, and while my impression of both is generally positive, each one has some of its own unique strengths and weaknesses. Interface Creation/Editing To begin I&#8217;ll focus &#8230; <a href="https://codethink.no-ip.org/archives/188">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
		<content:encoded><![CDATA[<p>So I&#8217;ve had a bit of exposure to both the iPhone and Android SDKs, and while my impression of both is generally positive, each has its own unique strengths and weaknesses.  </p>
<h4><b>Interface Creation/Editing</b></h4>
<p>To begin I&#8217;ll focus on one of the iPhone SDK&#8217;s biggest strengths:  Interface Builder.  This is a very slick and polished tool for building out an interface (or for more complex applications, the basic underpinnings of an interface) and hooking it up to the rest of your code, and it completely blows away Android&#8217;s layout-editor.  With Interface Builder, it is generally possible to build exactly what you want using simple drag and drop interactions.  With Android&#8217;s layout editor, often the best you can do is express the general idea of what you want using the graphical editing tools, and then you are forced to manually edit the XML document that the layout editor generates in order to fully realize your idea.  Put simply, I have never once had to manually edit the XML output by Interface Builder, and I have never been able to build a non-trivial Android UI without having to do at least some manual editing of its XML output.</p>
<p>Interface Builder is not without its faults, however; its mechanism for configuring referencing outlets for UI elements is counter-intuitive to the newcomer, at best, and the way it integrates with your XCode project seems to be more as if by magic than through any discernible coupling.  Perhaps you like such behind-the-scenes magic, but personally when I am coding something I like to be able to see how all the tools are fitting together, and that&#8217;s something that you can&#8217;t do with XCode and Interface Builder.  And with respect to configuring referencing outlets I think Google has the better approach here, generating a single class that any other class can use to directly reference any bundled resource (interface components, images, properties files, etc.).  I fault them for calling this class &#8220;R&#8221; and not something more meaningful like &#8220;Resource&#8221;, but the overall concept behind it is sound, and I think superior to Apple&#8217;s approach of requiring that each reference be manually configured by the developer.</p>
<p>And of course, Apple does have a bit of an unfair advantage in the realm of user-interface editing.  They only need to worry about targeting a single device with a single interface resolution (the 2x resolution &#8220;retina display&#8221; models implement an internal scaling algorithm so that they, too, can be targeted using the original iPhone resolution), while Google&#8217;s Android SDK developers need to target a myriad of devices, each one with a potentially unique interface resolution.  In essence the UI for an iPhone application can be effectively specified using a fixed/absolute layout, which greatly simplifies things for Interface Builder.  On the other hand, the lack of a consistent interface resolution in Android devices means that layouts must be specified in a relative fashion that allows them to change as needed to accommodate different screen resolutions.  This makes the task of building an effective interface editor much more complex for the Android guys.  </p>
<p>As an aside, I find it interesting to note that both Android and iPhone use very similar XML-based layout systems, so in theory there is nothing preventing Google&#8217;s implementation from being just as smooth as Apple&#8217;s, apart from a lack of polish.  And I would bet that that will come in time.</p>
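<p>To make the layout comparison concrete, here is a minimal (entirely hypothetical) Android layout of the kind the layout editor generates.  Because sizes are expressed relative to the parent and to content rather than in absolute pixels, the same XML adapts to any screen resolution, and the &#8216;id&#8217; attributes are what surface in the generated &#8220;R&#8221; class:</p>

```xml
<!-- Hypothetical layout: a label over a button, sized relative to the
     screen rather than to any fixed resolution. -->
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="match_parent"
    android:layout_height="match_parent">
    <TextView
        android:id="@+id/title"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:text="Hello" />
    <Button
        android:id="@+id/submit"
        android:layout_width="match_parent"
        android:layout_height="0dp"
        android:layout_weight="1"
        android:text="Submit" />
</LinearLayout>
```

<p>Code would then reach these elements directly as <em>R.id.title</em> and <em>R.id.submit</em>, with no per-reference configuration step.</p>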
<h4><b>Platform and Tools</b></h4>
<p>Moving along, and pushing a bit closer to the realm of personal opinion, there is also the difference in development platform, tools, and language to consider.  If you are doing iPhone development, then you have very little choice in the matter.  You must develop on a Mac (or Hackintosh) running the latest version of OS X; your only real option as far as IDEs are concerned is XCode, and you&#8217;ll be writing Objective-C.  Android developers have a bit more freedom.  You can choose to develop under OS X, Windows, or Linux (or presumably any other operating-system that can run Java), although you are still essentially locked into a single IDE: Eclipse (you can technically use other IDEs, but the Android development plugin only works with Eclipse, so you probably won&#8217;t want to), and you&#8217;ll be writing code in Java.  </p>
<p>While the merits of one OS versus another can be debated endlessly without ever reaching a definitive outcome, I think Android comes out the winner by virtue of letting developers choose their desired operating-system.  Would it be terribly difficult for Apple to do the same?  I don&#8217;t think so; however, I very much doubt that they will in the near future.  With respect to IDEs, I do personally feel that Eclipse is a better product than XCode, particularly if you have multiple projects that you are working on, and doubly so when SCM is introduced.  But then, I am sure there are some people, somewhere, who genuinely prefer coding in XCode, no?  </p>
<p>And as for Java versus Objective-C, that&#8217;s mostly a matter of personal preference.  Both languages are reasonable, although nowadays there are probably more developers who are comfortable with Java than with Objective-C, and in some areas Objective-C does show its age; particularly in the realm of memory management.  There is also a wider variety of Java-based libraries and build tools available, and using them with an Android project is generally a bit simpler than accomplishing the same task in the Objective-C and iPhone world.  But in the grand scheme of things, it&#8217;s really not enough of a difference to say that one platform is any better than the other here; a skilled Java developer should not have much difficulty adapting to Objective-C, or vice-versa.</p>
<h4><b>SDK Architecture</b></h4>
<p>Even further in the realm of personal opinion lies the relative merits of the SDK frameworks themselves.  The iPhone SDK is built strictly around the model-view-controller pattern, and the development tools all but force you to structure your applications in this same pattern.  You can subvert it if you want to (not that you would), but you really have to go out of your way to do so.  The Android SDK, on the other hand, is a bit more free-form.  Yes, there are model-view-controller overtones throughout, but the pattern is not as thoroughly pervasive as in the iPhone SDK, and more significantly, the developer can choose to follow some other pattern if they prefer without being penalized by the SDK and its tools.  </p>
<p>Of course, this freedom comes at a price; model-view-controller is widely viewed as a very good pattern to follow in many circumstances, and freedom to choose a different pattern means freedom to choose a less appropriate pattern, a bad pattern, or even no pattern at all.  Ultimately each SDK is following a different philosophy here, but I don&#8217;t think that one approach is inherently any better than the other.  The rigidity in Apple&#8217;s approach may frustrate beginners and advanced users alike while giving intermediate-level coders some comfort in its uniformity and predictability.  And while the extra freedom afforded by the Android SDK may prove useful to the skilled developers, it also makes it easier for neophytes to dig themselves into a hole and requires that the intermediate-level developer devote more thought to the overall structure and architecture of their code.  Each approach has its own pros and cons.</p>
<h4><b>Persistence Layer</b></h4>
<p>Both platforms start out equal here, with both Android and iPhone providing developers with access to an SQLite database for use within their applications.  Apple goes one step further, however, and provides <a href="http://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/CoreData/cdProgrammingGuide.html" target="_blank">Core Data</a>: an ORM framework that runs on top of the <a href="http://www.sqlite.org/" target="_blank">SQLite</a> database and abstracts away the actual database operations.  As such, the iPhone SDK enjoys a slight advantage here.  While Core Data is a bit crufty in its terminology and somewhat restricted in its functionality, it is still much better in most cases than writing SQL directly.  </p>
<p>But even ignoring its pedantic nature and functional limitations, Core Data is not perfect.  If you change your data model after your application has been released you will be left with little choice but to go through the Core Data <a href="http://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/CoreDataVersioning/Articles/vmMigrationProcess.html#//apple_ref/doc/uid/TP40005508-SW1" target="_blank">migration process</a> (if you don&#8217;t, your app will simply crash the moment it tries to access the old database instance).  And while this process is generally adequate for simple migrations using simple schemas populated with a relatively small number of entities, you are in for an onerous time if you have a complex data model populated with several hundred (or more) entities.  You may experience crashes caused by memory-management issues internal to the Core Data framework (I suspect that it doesn&#8217;t periodically re-fault entities as it migrates them, causing them to &#8220;leak&#8221; during the migration process), and there is no straightforward way to wrest control away from the framework to try and rescue things yourself. </p>
<p>In this sense, having to write some manual SQL statements to update your data model may well be preferable to Core Data.  You have full control over the process yourself, and the ability to ensure that the migration is performed in an efficient way that won&#8217;t crash the system or cause the app to fail if it is opened with an outdated schema revision.  And it&#8217;s worth noting that while Android doesn&#8217;t provide any explicit ORM framework for developers to use, it also doesn&#8217;t prohibit developers from bundling an ORM framework of their choice with their app.  As the platform uses a <a href="http://www.infoq.com/news/2007/11/android-java" target="_blank">custom Java SDK and runtime</a>, there are quite a few <a href="http://stackoverflow.com/questions/371538/any-good-orm-tools-for-android-development" target="_blank">potential candidates</a> available; though sadly not long-time favorites like Hibernate and other similar tools.</p>
<p>So you can have ORM on Android if you want it, you just have to pay the cost of bundling and configuring it yourself.  Overall it&#8217;s not a bad solution, but it still lacks the convenience of Apple&#8217;s approach with Core Data, which is entirely adequate to cover the needs of most developers, most of the time.  As such, the iPhone SDK comes out slightly ahead of the Android SDK here.  </p>
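<p>For reference, a hand-rolled migration of the sort described above might look something like this (the schema and names are invented for illustration; SQLite&#8217;s <em>user_version</em> pragma provides the revision tracking):</p>

```sql
-- Illustrative hand-rolled migration, to be run from application
-- code after PRAGMA user_version has been read and found to be 1.

BEGIN TRANSACTION;

-- v1 -> v2: add a column and backfill it, rather than letting the
-- app crash on an outdated schema.
ALTER TABLE note ADD COLUMN created_at INTEGER;
UPDATE note SET created_at = strftime('%s','now') WHERE created_at IS NULL;

-- Record the new schema revision.
PRAGMA user_version = 2;

COMMIT;
```

<p>Crude, but every step is under your control, and the upgrade path is explicit instead of being inferred by a framework.</p>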
<h4><b>Device Emulators</b></h4>
<p>Another important part of both the iPhone and the Android SDK is their device simulator/emulator software.  And here is another area where the iPhone SDK shows its polish.  The iPhone emulator is fast, sleek, and looks and feels very much like an actual iPhone.  It has an on-screen keyboard that works just like the on-screen keyboard on the actual device, and the same applies to all the standard built-in navigation components as well.  The Android emulator, on the other hand, feels and looks a lot more like a window with an Android UI drawn into it.  There is no on-screen keyboard; instead you get a kind of sad-looking grid of keys that sits next to the Android window, and the same applies to the navigation buttons.  The Android emulator feels a lot less like a simulator, and a lot more like a tool.  </p>
<p>But again, much of this is not Google&#8217;s fault.  Apple only has exactly one device to worry about, and they have complete control over and advance knowledge of its capabilities.  Furthermore, they benefit more from having a robust simulator, because they are in the business of manufacturing physical handsets far more than Google is or ever was (Nexus One notwithstanding).  Android has literally dozens of different devices to worry about, some of which have physical keyboards, some of which do not, and each of which may implement a unique navigation layout/paradigm.  In this light some of their simulator&#8217;s limitations start to make sense, and I will say that Google has come up with a good solution to this issue; creating a virtual Android device of any desired configuration is quite simple, and works exactly as it should.  In the end, while the iPhone emulator certainly feels a bit nicer to use and delivers an experience that more closely mirrors the actual device that it is simulating, both emulators are acceptable when it comes to actually getting the job of testing and debugging done.</p>
<p>One other thing I feel I must say about the Android simulator, unfortunately, is that it is slow.  Painfully so.  It&#8217;s tempting to blame Java for this shortfall, but I&#8217;ve been around Java enough to know that well-designed and optimized Java code is very nearly as fast as compiled binaries.  The difference in performance between the Android emulator and the iPhone emulator is an order of magnitude greater than any variance that I would credit to the Android emulator being Java-based where the iPhone emulator is a native binary.  As with some of the other Android development tools, I suspect the problem here is mainly a lack of polish, and that the situation will improve in time.</p>
<h4><b>Device Provisioning</b></h4>
<p>Lastly, deploying a development copy of an application to a physical iPhone for testing is a ridiculously circuitous process (at least initially), requiring multiple round-trips between the developer and Apple to generate, configure, and download the certificates and provisioning profiles necessary to cajole an iPhone into accepting a developer&#8217;s application.  To further complicate matters, these certificates expire periodically and must be refreshed, and they also impose various other limitations on what actions a developer or development team is allowed to take.  All told, it is a very developer-unfriendly process, and probably an offshoot of Apple&#8217;s &#8220;we control everything that happens on our devices&#8221; paranoia.  </p>
<p>Contrast this with the Android approach, where all you need to install a development copy of an application is a device and a USB cable (and a custom USB driver, if you are developing on Windows).  There is simply no contest here, Android&#8217;s approach to application deployment/provisioning is vastly superior to Apple&#8217;s from the developer&#8217;s point of view.  That is, unless you happen to like jumping through a series of hoops for no good reason; in which case Apple has you covered.</p>
<h4><b>Conclusion</b></h4>
<p>So which is better: the iPhone SDK or the Android SDK?  Neither, really.  While the iPhone SDK is absolutely more polished than the Android SDK, both provide usable tools that make building an iPhone or Android application a relatively straightforward process.  What the Android SDK lacks in polish it tends to make up for in flexibility, accessibility/developer-friendliness, and a greater availability of open-source third-party libraries, tools, and plugins.  So pick your preference, and start coding!</p>
]]></content:encoded>
			<wfw:commentRss>https://codethink.no-ip.org/archives/188/feed</wfw:commentRss>
		<slash:comments>4</slash:comments>
		</item>
	</channel>
</rss>
