#viewWillAppear
jacob-cs · 7 years ago
original source : https://iphonecodecenter.wordpress.com/2015/10/06/life-cycle-of-view-controller-in-iphone/
Life Cycle of a View Controller in iPhone: ViewController LifeCycle
Today we are going to discuss the ViewController lifecycle and how we should use each of the methods it provides.
What is a lifecycle? A lifecycle is the sequence of steps an object goes through from the point of creation to deletion. So how do we know about the steps during this period? A sequence of methods is called as the object progresses through its lifecycle. We often need to perform different kinds of actions at different steps of the lifecycle, so we commonly override these methods to perform those actions.
As I said, the start of the lifecycle is creation. Most MVCs are instantiated through the storyboard. So what happens after the creation of the view controller?
Outlet setting
View Appear and Disappear
Changes in the  Geometry
Memory Warnings
At each of the above steps, iOS invokes methods on the view controller. We will discuss each of these methods in detail.
ViewController LifeCycle Methods
awakeFromNib
Strictly speaking, this is not part of the view controller lifecycle, but it still plays a role in the initialization of the view controller. This method is called on every object that comes out of the storyboard. It happens before the outlets are set, i.e. before the loading of the view. Before you put any code here, consider whether it could live somewhere else, such as viewDidLoad or viewWillAppear.
Note: init is not called on objects that come out of the storyboard, so code you would normally write in init may belong in awakeFromNib instead.
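As a minimal Swift sketch of where awakeFromNib fits (the class name and setup are hypothetical, not from the article):

```swift
import UIKit

class BadgeView: UIView {            // hypothetical storyboard-instantiated view
    // init(coder:) is what actually runs for storyboard objects,
    // so one-time setup that would otherwise live in init goes here.
    override func awakeFromNib() {
        super.awakeFromNib()
        layer.cornerRadius = 8       // safe here: touches only this object
    }
}
```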
viewDidLoad
- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
}
This is a good place to put your setup code for the view. Always call the superclass method so the lifecycle completes correctly. At this point in the lifecycle we cannot be sure of the geometry of the device, so do not put any code here that depends on the geometry of the view.
viewWillAppear
- (void)viewWillAppear:(BOOL)animated
This notifies the view controller that its view is just about to appear on screen. The animated argument tells you whether the view is appearing instantly or via an animation. This method can be overridden to change, for example, the color of the status bar according to the orientation or the style of the view.
Note: the view is loaded only once, but it can appear and disappear again and again. So do not write code in this method that actually belongs in viewDidLoad; otherwise you will be doing the same work over and over.
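A minimal Swift sketch of per-appearance work (the subclass and outlet names are hypothetical):

```swift
import UIKit

class FeedViewController: UIViewController {   // hypothetical subclass
    @IBOutlet var tableView: UITableView!      // hypothetical outlet

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Runs every time the view comes on screen, not just once:
        // refresh anything that may have changed while it was hidden.
        tableView.reloadData()
    }
}
```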
viewWillDisappear
- (void)viewWillDisappear:(BOOL)animated
This notifies the controller that its view is about to be removed from the screen. This is where you write code such as storing the state of the view or cleaning up resources in use. The animated argument again tells you whether the view is disappearing instantly or via an animation. Avoid time-consuming work here; instead, kick off a thread that can do the task in the background. Both viewWillAppear and viewWillDisappear have "did" versions, viewDidAppear and viewDidDisappear, which are called after the view has appeared or disappeared, respectively.
viewWillLayoutSubviews
– (void)viewWillLayoutSubviews
This method is invoked when the view's frame changes and its subviews need to be laid out again. Here you can reposition the view's controls for the new layout. However, with iOS 6 and later we rarely need this: Auto Layout takes care of these layout changes, and if we add constraints to the application, this happens automatically.
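A minimal Swift sketch of the Auto Layout alternative mentioned above (the view names are hypothetical):

```swift
import UIKit

class BoxViewController: UIViewController {   // hypothetical subclass
    override func viewDidLoad() {
        super.viewDidLoad()
        let box = UIView()
        box.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(box)
        // With constraints, the box tracks size and orientation changes;
        // no manual work in viewWillLayoutSubviews is needed.
        NSLayoutConstraint.activate([
            box.centerXAnchor.constraint(equalTo: view.centerXAnchor),
            box.centerYAnchor.constraint(equalTo: view.centerYAnchor),
            box.widthAnchor.constraint(equalTo: view.widthAnchor, multiplier: 0.5),
            box.heightAnchor.constraint(equalToConstant: 44)
        ])
    }
}
```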
didReceiveMemoryWarning
– (void)didReceiveMemoryWarning
This method is rarely called. But if your application does memory-intensive work, such as handling large images or playing sounds, you should consider handling it. To respond to a memory warning, release anything that is memory-consuming and can be recreated, which means setting its pointer to nil.
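For example, dropping a recreatable cache (a minimal sketch; the subclass and cache property are hypothetical):

```swift
import UIKit

class GalleryViewController: UIViewController {   // hypothetical subclass
    var decodedImages: [String: UIImage] = [:]    // hypothetical in-memory cache

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Release anything expensive that can be rebuilt on demand.
        decodedImages.removeAll()
    }
}
```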
See also: https://bradbambara.wordpress.com/2015/01/18/object-life-cycle-uiview/
initWithCoder:
layerClass
setNeedsDisplay
addConstraints:
addConstraint: (can happen multiple times)
willMoveToSuperview:
invalidateIntrinsicContentSize
didMoveToSuperview
awakeFromNib
willMoveToWindow:
needsUpdateConstraints
didMoveToWindow
setNeedsLayout
updateConstraints
intrinsicContentSize
layoutSubviews (can happen multiple times)
drawRect
giniland-blog · 8 years ago
[Swift 3 & Xcode 8] Swift 3 code to reload a web view every time the tab is switched
About viewDidLoad / viewWillAppear handling:
viewDidLoad
- Called only once, the first time the view is loaded.
- Called only once, when this view is first displayed after the app launches.
viewWillAppear
- Called just before the view is displayed.
- Called every time the view comes on screen, e.g. when switching tabs.
- Called again and again, each time the tab is switched.
viewDidAppear
- Called after the view has finished being displayed.
- Called every time the view comes on screen, e.g. when switching tabs.
- Called again and again, each time the tab is switched.
viewWillDisappear
- Called just before the view is replaced by another view (disappears from the screen)…
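The pattern above, viewDidLoad once versus viewWillAppear/viewDidAppear on every appearance, can be simulated in plain Swift without UIKit (a minimal sketch; the class is invented for illustration):

```swift
// A plain-Swift sketch (no UIKit) simulating the call order described above.
final class FakeController {
    private(set) var log: [String] = []
    private var loaded = false

    func show() {
        if !loaded { loaded = true; log.append("viewDidLoad") }  // fires once
        log.append("viewWillAppear")                             // fires every time
        log.append("viewDidAppear")
    }
    func hide() {
        log.append("viewWillDisappear")
        log.append("viewDidDisappear")
    }
}

let c = FakeController()
c.show(); c.hide(); c.show()
// log contains "viewDidLoad" exactly once,
// while the appear/disappear pairs repeat per tab switch.
print(c.log)
```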
whitescissors · 4 years ago
A note on viewWillDisappear for sheets
praeclarum · 8 years ago
"Hotdog or Not" Using Azure Custom Vision, CoreML, and Xamarin
TL;DR I used Microsoft’s Custom Vision service to train a CoreML model and wrote an iOS app in Xamarin to execute it in less than two hours. It has a loose tie-in with a popular television show. Code on GitHub. You can hear James and me discuss this on Merge Conflict.
Machine Learning Is Easy Now?
Microsoft released a webapp called Custom Vision as a part of their Azure Cognitive Services. While Microsoft has been a player in machine learning for awhile now, this new service is special for one reason: it can export CoreML models.
CoreML is a new feature of iOS 11 that enables apps to execute neural networks (and other ML models) locally on the device. While this has always been possible, Apple made execution easy.
All you need is a trained model and all of a sudden your app can do fancy AI tricks - all locally without needing a network connection and without sharing information with third parties.
The only trick is that you need to train a model. While there are certainly pre-trained models online that you can download, chances are they won’t do exactly what you want.
To train a CoreML model, you would follow the Keras tutorials and examples. Keras, while amazingly powerful, is neither easy to learn nor easy to use even after you’ve learned it. Eventually your skills increase and you can use it, but it does take quite some effort. It also takes some money - training deep networks is slow on standard PC hardware. Soon you’ll be buying fast GPUs or paying for virtual machines out in the cloud.
Now with Custom Vision, Microsoft has made training easy. Instead of learning Keras and finding some fast GPUs to run it on, you can just use Microsoft’s web app. They use Azure’s infrastructure to find machines and, most importantly, they don’t require that you learn how to train networks manually. Instead, there is a GUI that holds your hand, lets you experiment, and keeps everything organized for you.
In this version, they made training easy only for a particular kind of model: recognizing the dominant object in an image. This is a classic task for CNN based neural networks to solve because they’re really good at it and it’s a useful capability with numerous real-world applications. It’s a great choice for Microsoft to lead with this type of model.
So that’s the hype. But does it work?
I tried training a new model and writing an app to execute it to find out. Since I wasn’t confident in my success (and perhaps had too many beers while extolling the virtues of ML to friends), I decided to make it an easy problem: hot dog or not. The app would take a picture and decide if the dominant object in the scene is a hotdog. Yes mom and dad, I’m really putting my degree to use.
I wrote my experience below as a tutorial for doing these kinds of trainings yourself. If you follow along, you’ll be able to write an iOS ML app yourself.
Step 1. Gather Training Data
No matter how skilled you are as a data scientist you will always be terrible at one thing - gathering training data.
We need two sets of images to train our model: one set of hotdogs and another of not hotdogs.
Sounds easy right? Well sure it is until you start actually doing it. Quickly you’ll run up against questions and troubling biases:
Is a drawing of a hotdog a hotdog? (Aristotle would be proud.)
Are two hotdogs a hotdog? (What about 3?)
Should I have an equal number of hotdogs with mustard as hotdogs with ketchup? (Should you bias the network towards your a priori view of the world? Are your biases personal or universal?)
Should I have an equal number of images of hotdogs and not hotdogs? (Since nearly all objects in the universe are not hotdogs, just how strong should our bias be?)
Why do people dress up their dogs as hotdogs?
The list goes on and on. You will confront your biases when collecting training data. Those biases will then be passed onto the network you train. You’ve been warned.
Thankfully the nature of this app precludes the need to do much soul searching for biases towards hotdogs. So I made some executive decisions:
No, drawings are not hotdogs
Yes, many hotdogs are a hotdog
Bias towards ketchup because it’s better
Bias towards not hotdogs since people love to try to trick these kinds of apps
Just accept it
Data collection takes a long time too even with Google’s assistance. After an hour of dragging and dropping, I ended up with 75 images of hotdogs and 175 images of not hotdogs. (I could have written a script but we all know how deep that rabbit hole is.)
For anyone who’s trained a CNN before, you know that this is a very small training set. Even more absurdly, Custom Vision only requires 5 images of each type. What’s going on here?
While Microsoft doesn’t explain the details, my guess is that they are fine-tuning a model already trained on images. The idea is that you take a trained model and then re-train only a part of it on new data. The hope is that the majority of the model is general purpose and can be reused. This saves training time and also reduces the required training set size. I’m not sure if this is what they’re doing, but I’m relieved that I don’t have to gather tens of thousands of images.
Of course with all ML, more data is better. But my fingers were getting tired. (Also, Custom Vision is currently capped at 1,000 training images.)
Step 2. Create a Project
You will want a project for each network you train. Projects hold all of your images and your trained models. You will end up training multiple times because it’s good to experiment with different training set sizes and compositions.
Create an account on https://www.customvision.ai. It’s free!
Create a New Project.
I named the project MyHotDogOrNot, gave it a banal description, and then chose the domain General (compact).
Domains are starting points for your trained model. If you are using cognitive services as a web API, then you should choose whichever domain most closely matches your training data.
General (compact) is the only domain that supports CoreML export so we must choose that. Hopefully Microsoft will allow us to use the other domains in the future in order to improve accuracy.
Step 3. Create Tags
When you’re viewing your project, you will see a list of tags. We need to make this list match the types of training images gathered.
Click the + at the top of the Tags list.
Create two tags: hotdog and not-hotdog.
When you’re done, you’ll see a list of your tags. The (0) means there are no images yet associated with the tags.
Step 4. Upload Training Data
You can upload all the images with the same tag using just one command.
Choose Add images from the toolbar and select all of your hotdog images.
Add the tag hotdog.
Click Upload files.
Repeat for the tag not-hotdog.
Step 5. Train the Model
So let’s train this thing already.
Click the big green Train button.
Go to the Performance tab and wait for your “Iteration” to finish.
When training is complete you will see the performance screen with the overall Precision and Recall of the model. In my case, I get slightly better results detecting not-hotdog than hotdog but they’re both great numbers so why fret.
Of course, these numbers don’t mean your network will work in the real world since the performance is measured against images you hand selected (with all your gross human biases). That said, you can use them as rough indicators of the relative performance of one training iteration against another.
Step 6. Export the CoreML Model
Finally, we can retrieve the CoreML file.
Click Export from your iteration’s performance screen.
Choose iOS 11 (CoreML) from the platform selection screen.
Click Export.
Click Download.
You will now have a fancy .mlmodel model file. Rename it to something nice.
If you open it with Xcode you will see its inputs and outputs.
We can see that its input is a 227 x 227 pixel image named data and its output includes a classLabel string that will be the model’s best judgement and also a loss output that will give a closeness measure for each of our tags.
Step 7. Write an App
At this point we have a model file and just need to put a UI on it.
To keep the code to a minimum, I’m going to use the Vision framework to execute the CoreML model. This framework makes resizing images to our required 227x227 dimensions easy and also takes care of numerical and pixel format conversions.
I will also use ARKit to display the camera on the screen. This is most definitely overkill, but it greatly reduces the amount of code we need to write to deal with the camera.
First, create a new Single View app.
Modify ViewController.cs to add an AR view.
// In ViewController
readonly ARSCNView cameraView = new ARSCNView ();

public override void ViewDidLoad ()
{
    base.ViewDidLoad ();
    cameraView.Frame = View.Bounds;
    cameraView.AutoresizingMask = UIViewAutoresizing.FlexibleDimensions;
    View.AddSubview (cameraView);
}
Perform the standard management of that view. This is all we need to get a live camera preview.
// In ViewController
public override void ViewWillAppear (bool animated)
{
    base.ViewWillAppear (animated);
    var config = new ARWorldTrackingConfiguration {
        WorldAlignment = ARWorldAlignment.Gravity,
    };
    cameraView.Session.Run (config, (ARSessionRunOptions)0);
}

public override void ViewWillDisappear (bool animated)
{
    base.ViewWillDisappear (animated);
    cameraView.Session.Pause ();
}
Add the model to the resources section of your app.
Add code to load the model. Models need to be compiled before they can be loaded. If you have access to Xcode, you can pre-compile your models. Compiling on the device is pretty fast so we won’t bother with that optimization. (I do this loading in the view controller’s ViewDidLoad method but you should architect your app better by doing this work on a background task.)
This also includes code to initialize the Vision request that we will make. Requests can be used for multiple images so we initialize it once. When a request completes, HandleVNRequest will be called.
// In ViewController
MLModel model;
VNCoreMLRequest classificationRequest;

// In ViewController.ViewDidLoad ()
var modelUrl = NSBundle.MainBundle.GetUrlForResource ("HotDogOrNot", "mlmodel");
var compiledModelUrl = MLModel.CompileModel (modelUrl, out var error);
if (error == null) {
    model = MLModel.Create (compiledModelUrl, out error);
    if (error == null) {
        var nvModel = VNCoreMLModel.FromMLModel (model, out error);
        if (error == null) {
            classificationRequest = new VNCoreMLRequest (nvModel, HandleVNRequest);
        }
    }
}
Add a tap handler that will respond to any taps on the screen (I like simple UIs). When a tap is detected, the Vision framework will be used to perform the model execution.
// In ViewController.ViewDidLoad ()
cameraView.AddGestureRecognizer (new UITapGestureRecognizer (HandleTapped));

// In ViewController
void HandleTapped ()
{
    var image = cameraView.Session?.CurrentFrame?.CapturedImage;
    if (image == null)
        return;
    var handler = new VNImageRequestHandler (image, CGImagePropertyOrientation.Up, new VNImageOptions ());
    Task.Run (() => {
        handler.Perform (new[] { classificationRequest }, out var error);
    });
}

void HandleVNRequest (VNRequest request, NSError error)
{
    if (error != null)
        return;
    var observations = request.GetResults<VNClassificationObservation> ()
                              .OrderByDescending (x => x.Confidence);
    ShowObservation (observations.First ());
}
Finally, in ShowObservation we present an alert with the model’s best guess.
// In ViewController
void ShowObservation (VNClassificationObservation observation)
{
    var good = observation.Confidence > 0.9;
    var name = observation.Identifier.Replace ('-', ' ');
    var title = good ? $"{name}" : $"maybe {name}";
    var message = $"I am {Math.Round (observation.Confidence * 100)}% sure.";
    BeginInvokeOnMainThread (() => {
        var alert = UIAlertController.Create (title, message, UIAlertControllerStyle.Alert);
        alert.AddAction (UIAlertAction.Create ("OK", UIAlertActionStyle.Default, _ => { }));
        PresentViewController (alert, true, null);
    });
}
And that’s it, we now have an app that can detect hot dogs (or not)!
You can find the complete source code on GitHub.
Conclusion
It’s great to see Microsoft and Apple technologies working together to make adding a powerful feature to apps easier. If you made it this far, you saw how I was able to build the app in less than two hours and I think you can see that it’s pretty easy to make your own ML apps.
If you enjoyed this you will probably enjoy listening to James Montemagno and me discuss it on our podcast Merge Conflict.
siva3155 · 6 years ago
300+ TOP iOS Interview Questions and Answers
iOS Interview Questions for Freshers and Experienced:
1. What is the latest iOS version?
The latest version of iOS is 13 (at the time of writing).

2. What is the output of the following program?
let numbers =
let numberSum = numbers.reduce(0, { $0 + $1 })
Answer: 10

3. Conversion of an error to an optional value can be used as a method to handle errors in Swift. (True/False)
Answer: True

4. Structures can be inherited. (True/False)
Answer: False

5. What is the output of the following program?
var randomArray: … =
print(randomArray)
Answer: Compilation error (1 is an Int, not an object)

6. What is the output of the following program?
class FirstClass {
    func doFunction() { print("I am superclass") }
}
class SecondClass: FirstClass {
    override func doFunction() { print("I am subclass") }
}
let object = SecondClass()
object.doFunction()
Answer: I am subclass

7. What is ARC?
Automatic Reference Counting.

8. What is an ARWorldMap?
An ARWorldMap object contains a snapshot of all the spatial mapping information that ARKit uses to locate the user’s device in real-world space.

9. How is a shared AR experience created?
Once the AR world map created on one device is transferred to another device using peer-to-peer connectivity or some other reliable channel, the other devices can have the same AR experience as the first device. To create an ongoing shared AR experience, such as placing an AR object: when one user taps in the scene, the app creates an anchor and adds it to the local ARSession; instead of sending the whole map, we can serialize that ARAnchor using Data and send it to the other devices in the multipeer session.

10. What are the main UITableViewDataSource methods needed to display data in a table view?
numberOfSections, numberOfRowsInSection, cellForRowAtIndexPath
11. What are the two basic things needed to create an NSFetchRequest in Core Data?
An entity name and a sort descriptor / NSPredicate.

12. What are tuples in Swift?
Tuples are a temporary container for multiple values: a comma-separated list of types, enclosed in parentheses. In other words, a tuple groups multiple values into a single compound value.

13. What are the control transfer statements in Swift?
break, continue, fallthrough, return, throw

14. What are closures? Write an example program using a closure.
Closures are self-contained blocks of functionality that can be passed around and used in code.
var addClosure = { (a: Int, b: Int) in return a + b }
let result = addClosure(1, 2)
print(result)

15. Briefly explain reference types and value types, with examples.
Classes are reference types and structures are value types. When a class instance is assigned to a variable and copied, only a reference to the original value is passed, so any subsequent changes through the copy also affect the original value. With a struct, assigning it to another variable generates a copy, so changes to the copy have no effect on the original value.

16. What are the features of Swift?
Variables are always initialized before use.
Memory is managed automatically.
Arrays and integers are checked for overflow.
switch statements can be used instead of chains of “if” statements.
It eliminates classes that are in unsafe mode.

17. What is the significance of “?” in Swift?
A question mark (?) used in the declaration of a property makes the property optional.

18. What is initialization?
This process involves setting an initial value for each stored property of an instance and performing any other setup required before the new instance is ready for use. For example:
init() {
    // initialization code here
}

19. What is NSURLSession?
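The value-versus-reference distinction from question 15 can be demonstrated in a few lines (a minimal sketch; the type names are invented):

```swift
struct PointS { var x = 0 }   // value type
class  PointC { var x = 0 }   // reference type

var s1 = PointS()
var s2 = s1                   // copies the struct
s2.x = 5                      // s1.x is still 0

let c1 = PointC()
let c2 = c1                   // copies only the reference
c2.x = 5                      // c1.x is now 5: both names share one object

print(s1.x, c1.x)
```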
NSURLSession is a replacement for NSURLConnection; like it, NSURLSession is both a distinct class and a group of related APIs.

20. What are the session configurations?
NSURLSessionConfiguration provides a few factory methods to create your session configuration:
Default configuration – provides access to the global singleton storage and settings.
Ephemeral session configuration – a private, in-memory-only storage.
Background session configuration – an out-of-process configuration keyed to an identifier string.

21. How do you define kSomeConstant?
let kSomeConstant: Int = 80
It would be implicitly typed as an integer anyway; if you want to be more specific you can annotate the type as above.

22. Define hash value vs. raw value.
hashValue: if an enum is declared without an underlying type, there is no rawValue available; instead each member has a hashValue.
rawValue: a typed value that you can assign to the enum members.

23. Define static binding and dynamic binding.
Static binding is resolved at compile time; method overloading is an example of static binding.
Dynamic binding is virtual binding resolved at run time; method overriding is an example of dynamic binding.

24. What is method overloading?
Defining a method with the same name multiple times with different arguments is known as method overloading.

25. What is method overriding?
When we define a method in a class, a subclass might need to provide a different version of that method. When a subclass provides a different implementation of the method defined in a superclass, with the same name, arguments and return type, that is called overriding. The implementation in the subclass replaces the code provided in the superclass.

26. What is QoS?
QoS stands for Quality of Service, and it can be applied throughout iOS.
One can prioritize queues, thread objects, dispatch queues and POSIX threads. By assigning the correct priority for the work, iOS apps remain quick, snappy and responsive.

27. What is the user-interactive QoS class?
Work that happens on the main thread and must complete immediately in order to provide a good user experience, e.g. drawing and UI animations.

28. What is the user-initiated QoS class?
Work the user kicks off that should yield immediate results. It is performed asynchronously when initiated from the UI and is mapped to the high-priority global queue. This work must complete for the user to continue.

29. What is a deadlock?
A deadlock is a situation where two different programs or processes depend on one another for completion. It is the situation where a set of processes is blocked because each process is holding a resource while waiting for another resource acquired by some other process.

30. Define run loop.
A run loop mode is a collection of input sources and timers to be monitored, together with the collection of run-loop observers to be notified.

31. What is iOS?
iOS is the mobile operating system created and developed by Apple Inc. exclusively for its hardware, using Apple’s own languages, Objective-C and Swift.

32. What is the difference between Cocoa and Cocoa Touch?
Cocoa includes the Foundation and AppKit frameworks for developing applications that run on Mac OS X. Cocoa Touch includes the Foundation and UIKit frameworks for developing applications that run on iPhone, iPod touch and iPad.

33. What are the different property attributes?
strong, weak, assign, copy, retain, atomic and nonatomic.

34. Explain the frame and bounds of a view in iOS.
bounds is the rectangle relative to the view’s own coordinate system (origin (0,0)). frame is the rectangle relative to the superview.

35. What are common design patterns in iOS?
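The QoS classes from questions 27, 28 and 43 map directly onto GCD global queues (a minimal sketch; the work items are placeholders, not from the article):

```swift
import Dispatch

// Higher QoS means the system schedules the work sooner and more aggressively.
DispatchQueue.global(qos: .userInteractive).async {
    // e.g. work feeding an in-flight animation
}
DispatchQueue.global(qos: .userInitiated).async {
    // e.g. load the document the user just tapped
}
DispatchQueue.global(qos: .utility).async {
    // e.g. a download with a progress bar
}
DispatchQueue.global(qos: .background).async {
    // e.g. prefetching or backups the user never sees
}
```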
Singleton, Delegate, Model-View-Controller, Observer, Facade, Command, Template Method.

36. What are the types of dispatch queues?
Serial: executes one task at a time, in sequential order.
Concurrent: executes one or more tasks concurrently.
Main dispatch queue: executes tasks on the application’s main thread.

37. What are key-value coding (KVC) and key-value observing (KVO)?
Key-value coding (KVC): accessing a property or value using a string key.
Key-value observing (KVO): observing changes to a property or value.

38. Application life cycle
application:willFinishLaunchingWithOptions
application:didFinishLaunchingWithOptions
applicationDidBecomeActive
applicationWillResignActive
applicationDidEnterBackground
applicationWillEnterForeground
applicationWillTerminate

39. What are the different states of an application?
Not running, Inactive, Active, Background, Suspended.

40. View life cycle
loadView, loadViewIfNeeded, viewDidLoad, viewWillAppear, viewWillLayoutSubviews, viewDidLayoutSubviews, viewDidAppear, viewWillDisappear, viewDidDisappear

41. What are delegates and notifications?
Delegate: creates a relationship between objects; it is one-to-one communication.
Notification: used when an object wants to notify other objects of an event; it is one-to-many communication.

42. What is the significance of “?” in Swift?
A question mark (?) used in the declaration of a property makes the property optional, meaning it may not hold a value.

43. What are the levels of priority in QoS?
User Interactive, User Initiated, Utility, Background.

44. How do you write an optional value in Swift?
An optional can hold either a value or no value. Optionals are written by appending a “?” to the type.

45. How do you unwrap an optional value in Swift?
The simplest way to unwrap an optional value is to add a “!” after the optional’s name. This is called force unwrapping.

46. What is the use of the if-let statement in Swift?
By using if-let, we can unwrap an optional in a safe way; otherwise a nil value may crash the app.

47. What is id in Objective-C?
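Questions 44–46 in a few runnable lines (a minimal sketch; the variable name is invented):

```swift
var name: String? = "Ada"     // an optional: may hold a value or nil

// Force unwrapping (question 45): crashes if name is nil, so use sparingly.
print(name!)

// if-let (question 46): the safe unwrapping pattern.
if let unwrapped = name {
    print("Hello, \(unwrapped)")
} else {
    print("No name set")
}
```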
id is a generic object type: it specifies a reference to any Objective-C object.

48. What are categories in Objective-C?
Categories provide the ability to add functionality to an object without changing the actual object.

49. What is an extension in Swift?
Extensions add new functionality to an existing class, structure, enumeration, or protocol type. Extensions are similar to categories in Objective-C.

50. Define class methods and instance methods.
An instance method is accessed through an instance of the class; a class method is accessed through the class itself.

51. How do you add a Swift file to an existing Objective-C project?
If you add a new Swift file to the existing project, Xcode will ask you to add an Objective-C bridging header.

52. What is Auto Layout and why is it used?
Auto Layout is a constraint-based layout system. Using it, developers can create an adaptive interface that responds appropriately to changes in screen size and device orientation.

53. How many ways can constraints be created programmatically?
Three ways: layout anchors, the NSLayoutConstraint class, and the Visual Format Language.

54. Is it possible to create multiple storyboards in a single project? If yes, how do you switch from one storyboard to another?
Yes. Using segues and storyboard references we can switch from one storyboard to another.

55. What are let and var in Swift?
let declares an immutable value, or constant, which cannot be changed, whereas var declares a mutable variable, which can be changed.

56. What are IBOutlet and IBAction in iOS?
An Interface Builder outlet (IBOutlet) is a variable that holds a reference to a UI component. An Interface Builder action (IBAction) is a function that is called when a specific user interaction occurs.

57. What is a bundle in iOS?
A bundle is a directory in the file system that groups all executable code and related resources, such as images and sounds, together in one place.

58. When would you use deinit in Swift?
deinit can be used if you need to perform some action or cleanup before the object is deallocated.

59. What is the responsibility of URLSession?
URLSession is responsible for sending and receiving HTTP requests.

60. What are the types of URLSessionTask?
URLSessionDataTask, URLSessionUploadTask, URLSessionDownloadTask.

61. What is the main difference between NSURLSession and NSURLConnection?
NSURLConnection: when the app goes into the background or is not running, everything being received or sent is lost.
NSURLSession: gives your app the ability to perform background downloads when the app is not running or is suspended.

62. What is JSON?
JSON (JavaScript Object Notation) is a text-based, lightweight and easy format for storing and exchanging data.

63. What is JSONSerialization?
The built-in way of parsing JSON is called JSONSerialization; it can convert a JSON string into a collection of dictionaries, arrays, strings and numbers.

64. What is the difference between Core Data and SQLite?
Core Data is a framework for managing an object graph; it is not a database. Core Data can use a SQLite database as its persistent store, but it also supports other persistent store types, including a binary store and an in-memory store. SQLite is a lightweight relational database.

65. What is the difference between Keychain and NSUserDefaults?
Keychain: if the user removes the app from the device, a saved username and password are still there.
NSUserDefaults: if the user removes the app from the device, the saved username and password are removed too.

66. What is app thinning? How do you reduce app size?
App thinning is the concept of reducing the app size on download. App size can be reduced using: app slicing, bitcode, and on-demand resources.

67. What is a bundle ID?
The bundle ID is the unique string that identifies your application to the system.

68. What are the types of certificates required for developing and distributing apps?
Development and distribution certificates.
Development certificate: used for development.
Distribution certificate: used for submitting apps to the App Store or for in-house distribution.

69. What binaries are required to install an app on a device?
.ipa, .app

70. What are the advantages of Swift over Objective-C?
Readability, easier maintenance, a safer platform, less code and less legacy, and speed.

71. What is the latest Xcode version and what are its features?
The latest Xcode version is 10.1. Its features include Dark Mode, the Swift source code editor, improved debugging tools, playgrounds for machine learning, and source control.

72. What are the new features in iOS 12 for developers?
ARKit 2, Siri Shortcuts, CarPlay for navigation apps, Health Records, Natural Language.

73. How do you pass data between view controllers?
Segue, in the prepareForSegue method (forward).
Delegate (backward).
Setting a variable directly (forward).

74. What is the output of the following program?
class Cricket {
    var score = 100
}
var player1 = Cricket()
var player2 = player1
player2.score = 200
print((player1.score), (player2.score))
Answer: 200, 200
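Question 63’s JSONSerialization can be sketched in a few lines (a minimal example; the JSON payload and keys are invented):

```swift
import Foundation

let json = #"{"name": "Ada", "score": 10}"#.data(using: .utf8)!

// JSONSerialization converts JSON data into Foundation collections.
if let dict = try? JSONSerialization.jsonObject(with: json) as? [String: Any] {
    let name = dict["name"] as? String   // "Ada"
    let score = dict["score"] as? Int    // 10
    print(name ?? "?", score ?? 0)
}
```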
0 notes
for-the-user · 8 years ago
Text
In App Purchase - The Code
It has a function to check whether the user's iTunes account can make payments, meaning they have added a payment method to their iTunes account:
requestProductInfo() -> SKPaymentQueue.canMakePayments()
If yes, it requests from apple, the IAP products you created in your iTunes Connect Account when submitting the app:
requestProductInfo() -> SKPaymentQueue.canMakePayments() -> productRequest.start() -> productsRequest(...)
It has a function to initiate a new purchase:
beginTransaction(...) -> .addPayment -> paymentQueue(...) -> case .Purchased -> .finishTransaction
And another to initiate the restoring of already purchased items (in the case where a user gets a new phone):
restorePurchase() -> .restoreCompletedTransactions -> paymentQueue(...) -> case .Restored -> .finishTransaction -> paymentQueueRestoreCompletedTransactionsFinished(...)
These two functions add transactions to a payment queue, which completes them asynchronously; on completion you decide what to unlock in your app. The helper also has callbacks for failed purchases or restorations, with room to handle those scenarios.
Here's the complete Swift file; note productIDs at the top. These array elements have to match exactly the product identifiers you created in iTunes Connect. If you make a typo, productsRequest(...) will not find them.
--- IAPHelpers.swift ---

import StoreKit

public var productIDs = NSSet(array: ["my.product.id1", "my.product.id2", "my.product.id3"])
public var productRequest = SKProductsRequest()
public var productArray: [SKProduct] = []
public var currentProductID: String! = ""
public var product: SKProduct?
public var transactionInProgress = false
public var canMakePayments = false

class IAPHelpers: NSObject, SKProductsRequestDelegate, SKPaymentTransactionObserver {

    func requestProductInfo() -> Bool {
        let result: Bool
        if SKPaymentQueue.canMakePayments() {
            productRequest = SKProductsRequest(productIdentifiers: productIDs as! Set<String>)
            productRequest.delegate = self
            productRequest.start()
            print("Can perform In App Purchases.")
            result = true
        } else {
            print("Cannot perform In App Purchases.")
            result = false
        }
        return result
    }

    func productsRequest(request: SKProductsRequest, didReceiveResponse response: SKProductsResponse) {
        if response.products.count != 0 {
            print("Here come the products.")
            if response.invalidProductIdentifiers.count != 0 {
                print(response.invalidProductIdentifiers.description)
            } else {
                for product in response.products {
                    productArray.append(product)
                    print(productArray)
                }
            }
        } else {
            print("Waiting for the products.")
        }
    }

    func beginTransaction(payment: SKPayment) {
        if transactionInProgress == true {
            return
        }
        print("Beginning transaction...")
        SKPaymentQueue.defaultQueue().addPayment(payment)
        print("adding payment...")
        transactionInProgress = true
    }

    func restorePurchase() {
        if transactionInProgress == true {
            return
        }
        SKPaymentQueue.defaultQueue().restoreCompletedTransactions()
        transactionInProgress = true
    }

    func paymentQueue(queue: SKPaymentQueue, updatedTransactions transactions: [SKPaymentTransaction]) {
        for transaction: SKPaymentTransaction in transactions {
            switch transaction.transactionState {
            case .Purchased:
                if currentProductID == "my.product.id1" { print("unlock something") }
                if currentProductID == "my.product.id2" { print("unlock something") }
                if currentProductID == "my.product.id3" { print("unlock something") }
                SKPaymentQueue.defaultQueue().finishTransaction(transaction)
                transactionInProgress = false
                print("Transaction completed successfully.")
            case .Failed:
                SKPaymentQueue.defaultQueue().finishTransaction(transaction)
                print("Transaction Failed")
                transactionInProgress = false
            case .Restored:
                SKPaymentQueue.defaultQueue().finishTransaction(transaction)
                print("Transaction Restored")
                transactionInProgress = false
            default:
                break
            }
        }
    }

    func paymentQueue(queue: SKPaymentQueue, removedTransactions transactions: [SKPaymentTransaction]) { }

    func paymentQueueRestoreCompletedTransactionsFinished(queue: SKPaymentQueue) {
        for transaction: SKPaymentTransaction in queue.transactions {
            if transaction.payment.productIdentifier == "my.product.id1" { print("unlock something") }
            if transaction.payment.productIdentifier == "my.product.id2" { print("unlock something") }
            if transaction.payment.productIdentifier == "my.product.id3" { print("unlock something") }
        }
        print("Purchased Transactions Restored")
    }

    func paymentQueue(queue: SKPaymentQueue, updatedDownloads downloads: [SKDownload]) { }

    func paymentQueue(queue: SKPaymentQueue, restoreCompletedTransactionsFailedWithError error: NSError) { }
}
Including the file above won't actually do anything on its own; you need to use it in other parts of your app. How can we do so? This code sample is from the first view controller, "Index", that gets loaded when you enter the app. We have a button which takes us to a page with a list of things to buy. The button is disabled at first, but when the view loads, it checks whether the user's iTunes account can make payments and, if so, enables the button.
--- Index.swift ---

class Index: UIViewController {

    @IBOutlet weak var toPurchasesViewButton: UIButton!
    let paymentTransactionObserver = IAPHelpers()

    . . .

    override func viewWillAppear(animated: Bool) {
        if canMakePayments == true {
            self.toPurchasesViewButton.enabled = true
        }
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        . . .
        // In App Purchase checks
        if canMakePayments == false {
            self.toPurchasesViewButton.enabled = false
            canMakePayments = paymentTransactionObserver.requestProductInfo()
        }
        . . .
    }
}
Clicking the button has been set up to segue us to this other View Controller which has one button to buy something and one button to restore purchases.
--- Purchase.swift ---

import StoreKit

class PurchaseViewController: UIViewController {

    let paymentTransactionObserver = IAPHelpers()

    @IBOutlet weak var purchaseButton: UIButton!
    @IBOutlet weak var restoreButton: UIButton!

    @IBAction func purchaseSomething(sender: UIButton) {
        for product in productArray {
            if product.productIdentifier == "my.product.id1" {
                currentProductID = product.productIdentifier
                purchaseButton.enabled = false
                paymentTransactionObserver.beginTransaction(SKPayment(product: product))
            }
        }
    }

    @IBAction func restoreSomething(sender: UIButton) {
        paymentTransactionObserver.restorePurchase()
        restoreButton.enabled = false
    }
}
Lastly, we register the helper as a transaction observer in AppDelegate.swift. StoreKit delivers transaction updates (including purchases interrupted on a previous run) to whatever observer is attached to the payment queue, so the observer should be added at launch and removed at termination; without this registration, the paymentQueue(...) callbacks above are never called.
--- AppDelegate.swift ---

import StoreKit

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {

    var window: UIWindow?
    let paymentTransactionObserver = IAPHelpers()

    func application(application: UIApplication, didFinishLaunchingWithOptions launchOptions: [NSObject: AnyObject]?) -> Bool {
        // Override point for customization after application launch.
        SKPaymentQueue.defaultQueue().addTransactionObserver(paymentTransactionObserver)
        return true
    }

    . . .

    func applicationWillTerminate(application: UIApplication) {
        // Called when the application is about to terminate. Save data if appropriate.
        // See also applicationDidEnterBackground:.
        SKPaymentQueue.defaultQueue().removeTransactionObserver(paymentTransactionObserver)
    }
}
And that's all I know about Apple's In-App Purchase for now :?
0 notes
tak4hir0 · 6 years ago
Link
Thinking About Session Management with the Salesforce Mobile SDK

Introduction
Hello everyone. This is Mizoguchi, the running systems engineer. Apple Watch preorders have opened; did you order one? I preordered immediately, made a dash for the store at quitting time to try one on, and let my imagination run wild as a future Apple Watcher. Of course, I gracefully passed on the 18K Apple Watch Edition costing over two million yen, made of a gold alloy that metallurgists developed to be up to twice as hard as standard gold, and preordered the entirely predictable Sport model. I can't wait for it to arrive!

Session management in hybrid apps
Now, on to the main topic! This post is a supplement to my earlier article, "Hybrid App Development with WKWebView and Salesforce." If you read that article, you may have noticed that it did not consider how to manage the session when displaying a Visualforce page in the WebView. Let's see concretely what happens with the previous implementation. So far everything is the same as before: the detail screen implemented in Visualforce is displayed in the WebView. Now press the home button and leave the app alone until the session expires.

------------A few hours later-------------

The session has now timed out, so let's open the sample app again. Oh... that's not great: the Salesforce login screen appears inside the WebView. The previous implementation lacked any handling for what to do when the session expires, which is why this happens.

Checking the session before communicating with Salesforce
The login screen appears unexpectedly because, while the Visualforce page stays open in the WebView, the session ID (access token) passed via "sid=" expires, and the next request is sent with the stale token. The fix, then, is to authenticate before communicating with Salesforce, i.e. check the session first. I revised the previous code as follows:

import UIKit
import WebKit

class DetailViewController: BaseViewController, WKNavigationDelegate, SFOAuthCoordinatorDelegate {

    var accountItem: AccountModel?
    var accountWebView: WKWebView?

    init(nibName: String, accountItem: AccountModel) {
        super.init(nibName: nibName, bundle: nil)
        self.accountItem = accountItem
    }

    required init(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        SFRestAPI.sharedInstance().coordinator.delegate = self
        setupDesign()
        setupWebView()
    }

    override func viewWillAppear(animated: Bool) {
        SVProgressHUD.showWithStatus("Authenticating")
        SFRestAPI.sharedInstance().coordinator.authenticate()
    }

    override func viewDidDisappear(animated: Bool) {
        SVProgressHUD.dismiss()
    }

    func setupDesign() {
        let refreshButton: UIBarButtonItem = UIBarButtonItem(barButtonSystemItem: .Refresh, target: self, action: "tapRefleshButton")
        self.navigationItem.rightBarButtonItem = refreshButton
    }

    func setupWebView() {
        accountWebView = WKWebView()
        self.view = accountWebView!
        accountWebView!.navigationDelegate = self
    }

    func createAccountRequest() -> NSURLRequest {
        let instanceUrl: String = SFRestAPI.sharedInstance().coordinator.credentials.instanceUrl.description
        let accessToken: String = SFRestAPI.sharedInstance().coordinator.credentials.accessToken
        let authUrl: String = instanceUrl + "/secur/frontdoor.jsp?sid=" + accessToken + "&retURL="
        let accountUrl: String = instanceUrl + "/apex/AccountMobile?id=" + accountItem!.salesforceId!
        let request: NSURL = NSURL(string: authUrl + accountUrl)!
        let urlRequest: NSURLRequest = NSURLRequest(URL: request)
        return urlRequest
    }

    func webView(webView: WKWebView, didStartProvisionalNavigation navigation: WKNavigation!) {
        SVProgressHUD.showWithStatus("Loading")
    }

    func webView(webView: WKWebView, didFinishNavigation navigation: WKNavigation!) {
        SVProgressHUD.dismiss()
        self.title = accountWebView!.title
    }

    func webView(webView: WKWebView, didFailNavigation navigation: WKNavigation!, withError error: NSError) {
        SVProgressHUD.dismiss()
    }

    func webView(webView: WKWebView, decidePolicyForNavigationAction navigationAction: WKNavigationAction, decisionHandler: (WKNavigationActionPolicy) -> Void) {
        if navigationAction.request.URL!.absoluteString!.hasPrefix("completed://") {
            self.navigationController!.popViewControllerAnimated(true)
        }
        decisionHandler(WKNavigationActionPolicy.Allow)
    }

    func tapRefleshButton() {
        accountWebView!.reload()
    }

    func oauthCoordinatorDidAuthenticate(coordinator: SFOAuthCoordinator!, authInfo info: SFOAuthInfo!) {
        SVProgressHUD.dismiss()
        accountWebView!.loadRequest(createAccountRequest())
    }

    func oauthCoordinator(coordinator: SFOAuthCoordinator!, didBeginAuthenticationWithView view: UIWebView!) {}
}

Let's look at what changed, one piece at a time.

override func viewDidLoad() {
    super.viewDidLoad()
    SFRestAPI.sharedInstance().coordinator.delegate = self
    setupDesign()
    setupWebView()
}

First, in viewDidLoad we declare this class as the delegate of the SFOAuthCoordinator, so that the results of the authentication flow are delivered back to this class.

override func viewWillAppear(animated: Bool) {
    SVProgressHUD.showWithStatus("Authenticating")
    SFRestAPI.sharedInstance().coordinator.authenticate()
}

Then, in viewWillAppear we call the method that starts authentication. Authentication runs asynchronously, and if the session ID (access token) has expired, the session is renewed using the refresh token.

func oauthCoordinatorDidAuthenticate(coordinator: SFOAuthCoordinator!, authInfo info: SFOAuthInfo!) {
    SVProgressHUD.dismiss()
    accountWebView!.loadRequest(createAccountRequest())
}

Finally, this delegate method is called once authentication has completed. Because the request to Salesforce is issued inside this delegate method, the flow is guaranteed to be: authenticate (check the session), then communicate with Salesforce.

Wrapping up
How was it? This turned out to be a small, tips-style post, but how to manage sessions when using a WebView is an important point in mobile development. With Lightning and other technologies on the way, the mobile market looks set to become even livelier; let's keep enjoying development while weighing considerations like these!
0 notes
kkitumblrr · 7 years ago
Link
0 notes
iyarpage · 8 years ago
Text
iOS Animation Tutorial: Getting Started
This is an abridged chapter from our book iOS Animations by Tutorials, which has been completely updated for Swift 4 and iOS 11. This tutorial is presented as part of our iOS 11 Launch Party — enjoy!
Animation is a critical part of your iOS user interfaces. Animation draws the user’s attention toward things that change, and adds a ton of fun and polish to your app’s UI.
Even more importantly, in an era of “flat design”, animation is one of the key ways to make your app stand apart from others.
In this tutorial, you’ll learn how to use UIView animation to do the following:
Set the stage for a cool animation.
Create move and fade animations.
Adjust the animation easing.
Reverse and repeat animations.
There’s a fair bit of material to get through, but I promise it will be a lot of fun. Are you up for the challenge?
Your First Animation
Download and open the starter project for this tutorial. Build and run your project in Xcode; you’ll see the login screen of a fictional airline app like so:
The app doesn’t do much right now; it just shows a login form with a title, two text fields, and a big friendly button at the bottom.
There’s also a nice background picture and four clouds. The clouds are already connected to outlet variables in the code named cloud1 through cloud4.
Open ViewController.swift and have a look inside. At the top of the file you’ll see all the connected outlets and class variables. Further down, there’s a bit of code in viewDidLoad() which initializes some of the UI. The project is ready for you to jump in and shake things up a bit!
Enough with the introductions — you’re undoubtedly ready to try out some code!
Your first task is to animate the form elements onto the screen when the user opens the application. Since the form is now visible when the app starts, you’ll have to move it off of the screen just before your view controller makes an appearance.
Add the following code to viewWillAppear():
heading.center.x -= view.bounds.width username.center.x -= view.bounds.width password.center.x -= view.bounds.width
This places each of the form elements outside the visible bounds of the screen, like so:
Since the code above executes before the view controller appears, it will look like those text fields were never there in the first place.
Build and run your project to make sure your fields truly appear offscreen just as you had planned:
Perfect — now you can animate those form elements back to their original locations via a delightful animation.
Add the following code to the end of viewDidAppear():
UIView.animate(withDuration: 0.5) { self.heading.center.x += self.view.bounds.width }
To animate the title into view you call the UIView class method animate(withDuration:animations:). The animation starts immediately and animates over half a second; you set the duration via the first method parameter in the code.
It’s as easy as that; all the changes you make to the view in the animations closure will be animated by UIKit.
Build and run your project; you should see the title slide neatly into place like so:
That sets the stage for you to animate in the rest of the form elements.
Since animate(withDuration:animations:) is a class method, you aren’t limited to animating just one specific view; in fact you can animate as many views as you want in your animations closure.
Add the following line to the animations closure:
self.username.center.x += self.view.bounds.width
Build and run your project again; watch as the username field slides into place:
Seeing both views animate together is quite cool, but you probably noticed that animating the two views over the same distance and with the same duration looks a bit stiff. Only kill-bots move with such absolute synchronization!
Wouldn’t it be cool if each of the elements moved independently of the others, possibly with a little bit of delay in between the animations?
First remove the line you just added that animates username:
self.username.center.x += self.view.bounds.width
Then add the following code to the bottom of viewDidAppear():
UIView.animate(withDuration: 0.5, delay: 0.3, options: [], animations: { self.username.center.x += self.view.bounds.width }, completion: nil )
The class method you use this time looks familiar, but it has a few more parameters to let you customize your animation:
withDuration: The duration of the animation.
delay: The amount of seconds UIKit will wait before it starts the animation.
options: Lets you customize a number of aspects about your animation. You’ll learn more about this parameter later on, but for now you can pass an empty array [] to mean “no special options”.
animations: The closure expression to provide your animations.
completion: A code closure to execute when the animation completes. This parameter often comes in handy when you want to perform some final cleanup tasks or chain animations one after the other.
In the code you added above you set delay to 0.3 to make the animation start just a hair later than the title animation.
Build and run your project; how does the combined animation look now?
Ah — that looks much better. Now all you need to do is animate in the password field.
Add the following code to the bottom of viewDidAppear():
UIView.animate(withDuration: 0.5, delay: 0.4, options: [], animations: { self.password.center.x += self.view.bounds.width }, completion: nil )
Here you’ve mostly mimicked the animation of the username field, just with a slightly longer delay.
Build and run your project again to see the complete animation sequence:
That’s all you need to do to animate views across the screen with a UIKit animation!
That’s just the start of it — you’ll be learning a few more awesome animation techniques in the remainder of this tutorial!
Animatable Properties
Now that you’ve seen how easy animations can be, you’re probably keen to learn how else you can animate your views.
This section will give you an overview of the animatable properties of a UIView, and then guide you through exploring these animations in your project.
Not all view properties can be animated, but all view animations, from the simplest to the most complex, can be built by animating the subset of properties on a view that do lend themselves to animation, as outlined in the section below.
Position and Size
You can animate a view’s position and frame in order to make it grow, shrink, or move around as you did in the previous section. Here are the properties you can use to modify a view’s position and size:
bounds: Animate this property to reposition the view’s content within the view’s frame.
frame: Animate this property to move and/or scale the view.
center: Animate this property when you want to move the view to a new location on screen.
Don’t forget that in Swift, several UIKit properties such as size and center are mutable. This means you can move a view vertically by changing center.y, or you can shrink a view by decreasing frame.size.width.
Appearance
You can change the appearance of the view’s content by either tinting its background or making the view fully or semi-transparent.
backgroundColor: Change this property of a view to have UIKit gradually change the background color over time.
alpha: Change this property to create fade-in and fade-out effects.
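Both appearance properties can be animated in the same closure; a quick sketch, where myView stands in for any view of yours:

```swift
UIView.animate(withDuration: 0.5) {
    self.myView.backgroundColor = .red  // UIKit blends the tint over time
    self.myView.alpha = 0.0             // fades the view out
}
```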
Transformation
Transforms modify views in much the same way as above, since you can also adjust size and position.
transform: Modify this property within an animation block to animate the rotation, scale, and/or position of a view.
These are affine transformations under the hood, which are much more powerful and allow you to describe the scale factor or rotation angle rather than needing to provide a specific bounds or center point.
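For example, a rotation and a scale can be combined into a single animated transform; a sketch, again with myView as a placeholder:

```swift
UIView.animate(withDuration: 0.5) {
    let rotate = CGAffineTransform(rotationAngle: .pi / 4)   // 45 degrees
    let scale = CGAffineTransform(scaleX: 0.5, y: 0.5)       // half size
    self.myView.transform = rotate.concatenating(scale)
}
```

Setting transform back to .identity (inside another animation block) returns the view to its original appearance.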
These look like pretty basic building blocks, but you’ll be surprised at the complex animation effects you’re about to encounter!
Animation Options
Looking back to your animation code, you were always passing [] in to the options parameter. options lets you customize how UIKit creates your animation. You’ve only adjusted the duration and delay of your animations, but you can have a lot more control over your animation parameters than just that.
Below is a list of options declared in the UIViewAnimationOptions set type that you can combine in different ways for use in your animations.
Repeating
You’ll first take a look at the following two animation options:
.repeat: Include this option to make your animation loop forever.
.autoreverse: Include this option only in conjunction with .repeat; this option repeatedly plays your animation forward, then in reverse.
Modify the code that animates the password field in viewDidAppear() to use the .repeat option as follows:
UIView.animate(withDuration: 0.5, delay: 0.4, options: .repeat, animations: { self.password.center.x += self.view.bounds.width }, completion: nil )
Build and run your project to see the effect of your change:
The form title and username field fly in and settle down in the center of the screen, but the password field keeps animating forever from its position offscreen.
Modify the same code you changed above to use both .repeat and .autoreverse in the options parameter as follows:
UIView.animate(withDuration: 0.5, delay: 0.4, options: [.repeat, .autoreverse], animations: { self.password.center.x += self.view.bounds.width }, completion: nil )
Note how if you want to enable more than one option you need to use the set syntax and list all options separated with a comma and enclose the list in square brackets.
Note: If you only need a single option, Swift allows you to omit the square brackets as a convenience. However, you can still include them in case you add more options in the future. That means [] for no options, [.repeat] for a single option, and [.repeat, .autoreverse] for multiple options.
Build and run your project again; this time the password field just can’t make up its mind about staying on the screen!
Animation Easing
In real life things don’t just suddenly start or stop moving. Physical objects like cars or trains slowly accelerate until they reach their target speed, and unless they hit a brick wall, they gradually slow down until they come to a complete stop at their final destination.
The image below illustrates this concept in detail:
To make your animations look more realistic, you can apply the same effect of building momentum at the beginning and slowing down before the end, known in general terms as ease-in and ease-out.
You can choose from four different easing options:
.curveLinear: This option applies no acceleration or deceleration to the animation.
.curveEaseIn: This option applies acceleration to the start of your animation.
.curveEaseOut: This option applies deceleration to the end of your animation.
.curveEaseInOut: This option applies acceleration to the start of your animation and applies deceleration to the end of your animation.
To better understand how these options add visual impact to your animation, you’ll try a few of the options in your project.
Modify the animation code for your password field once again with a new option as follows:
UIView.animate(withDuration: 0.5, delay: 0.4, options: [.repeat, .autoreverse, .curveEaseOut], animations: { self.password.center.x += self.view.bounds.width }, completion: nil )
Build and run your project; notice how smoothly the field decelerates until it reaches its rightmost position, before returning to the left side of the screen:
This looks much more natural since that’s how you expect things to move in the real world.
Now try the opposite. Ease-in the animation when the field is still outside of the screen by modifying the same code as above to change the .curveEaseOut option to .curveEaseIn as follows:
UIView.animate(withDuration: 0.5, delay: 0.4, options: [.repeat, .autoreverse, .curveEaseIn], animations: { self.password.center.x += self.view.bounds.width }, completion: nil )
Build and run your project; observe how the field jumps back from its rightmost position with robotic vigor. This looks unnatural and isn’t as visually pleasing as the previous animation.
Finally give .curveEaseInOut a try. It combines the two options you already know into one very natural looking easing. .curveEaseInOut is also the default easing function UIKit applies to your animations.
You’ve seen how the various animation options affect your project and how to make movements look smooth and natural.
Before you move on, change the options on the piece of code you’ve been playing with back to []:
UIView.animate(withDuration: 0.5, delay: 0.4, options: [], animations: { self.password.center.x += self.view.bounds.width }, completion: nil )
Where to Go From Here?
You can download the final project from this tutorial here.
Now that you know how basic animations work, you’re ready to tackle some more dazzling animation techniques.
Animating views from point A to point B? Pshaw — that’s so easy!
If you enjoyed what you learned in this tutorial, why not check out the complete iOS Animations by Tutorials book, available in our store?
Here’s a taste of what’s in the book:
Section I: View Animations: The first section of the book covers view animations in UIKit. View animations are the simplest type of animations in iOS, but still very powerful; you can easily animate almost any view element and change things such as its size, position and color.
Section II: Auto Layout: Auto Layout is becoming more common in apps, and there are some special techniques to animate views that are sized and positioned with auto layout constraints. This section provides a crash course in Auto Layout and then covers the animation techniques that play nicely with Auto Layout.
Section III: Layer Animations: Views on iOS are backed by layers, which offer a lower-level interface to the visual content of your apps. When you need more flexibility or performance, you can use layer animations rather than view animations. In this section, you’ll learn about layers and the Core Animation API.
Section IV: View Controller Transitions: Animating views and layers is impressive, but you can dial it up to 11 and animate entire view controllers! In this section, you’ll learn the techniques for transitioning between view controllers, as well as transitioning between changes in device orientations, all in the same view controller.
Section V: 3D Animations In this section, you’ll move beyond two dimensions and learn about 3D animations and effects with CATransform3D. Although Core Animation isn’t a true 3D framework, it lets you position things in 3D-space and set perspective – which leads to some very slick and impressive effects!
Section VI: Animations with UIViewPropertyAnimator: UIViewPropertyAnimator, introduced in iOS 10, helps developers create interactive, interruptible view animations. Since all APIs in UIKit just wrap that lower level functionality, there won’t be many surprises for you when looking at UIViewPropertyAnimator. This class does make certain types of view animations a little easier to create, so it is definitely worth looking into.
Section VII: Further types of animations: There are two more animation techniques that are part of Core Animation and UIKit, but don’t really fit into Sections I and III. In this section, you’ll get an impressive snowy effect working with particle emitters, and then learn about flipbook-style frame animation with UIImageView.
And to help sweeten the deal, the digital edition of the book is on sale for $49.99! But don’t wait — this sale price is only available for a limited time.
Speaking of sweet deals, be sure to check out the great prizes we’re giving away this year with the iOS 11 Launch Party, including over $9,000 in giveaways!
We hope you enjoy this update, and stay tuned for more book releases and updates!
The post iOS Animation Tutorial: Getting Started appeared first on Ray Wenderlich.
0 notes
jacob-cs · 7 years ago
Text
Points to watch when using layers together with Auto Layout
original source : https://marcosantadev.com/calayer-auto-layout-swift/
Introduction
Recently, I had to work with CALayer to make some effects in an iOS Application. Unfortunately, I had some problems with the auto layout and I had to find a workaround. In this article, I propose some approaches which I tried. You can find the best one at the end of this article in “Final Comparison”.
This article is not supposed to be a guide for CALayer but it’s just an explanation of a specific scenario: a workaround for the missing auto layout in sublayers.
Happy Reading!
Content
What Is A CALayer?
What About Auto Layout?
Conclusion
Update In viewDidLayoutSubviews/layoutSubviews
Update With KVO
Custom UIView
Final Comparison
What Is A CALayer?
We can consider CALayer a graphic context of any UIView object where we can add corners radius, borders, shadows and so on. Then, we can also apply animations to some layer properties to get nice effects—like a corner radius animation when we highlight a button.
Core animation provides several layer types by default, the main ones are:
CALayer: it’s the base class which we can extend to create our custom layers.
CATextLayer: it’s a layer which provides the possibility to render a text from a string with the possibility to set some attributes.
CAShapeLayer: it’s a layer which provides the possibility to draw shapes using a CGPath object.
CAGradientLayer: it’s a layer which provides the possibility to create a color gradient using an array of CGColorRef.
What About Auto Layout?
As we saw previously, a layer is the graphic context of a UIView object. This means that any UIView object has a main layer, which we can use to change its corner radius, border and so on. We don’t need to set any constraints on this main layer, since it automatically fills its view; we cannot change the frame of this layer manually, because it will always fill its view.
At this point, you may be wondering: why should we bother about constraints if the main layer doesn’t need auto layout? Well, let’s consider that we want to use a sublayer in our view to add an additional text, shape or gradient with a specific frame. Unfortunately, iOS doesn’t allow the usage of constraints for sublayers. This means that we need a workaround to replace the missing auto layout.
We can use 3 approaches to achieve our goals. To simplify our code, we use a gradient layer which fills the parent view.
Please note that we can add a sublayer with whatever frame we want; we just have to set the CALayer property frame. In this example, the sublayer fills its parent view to keep the example easy to understand.
Update In viewDidLayoutSubviews/layoutSubviews
In this approach, we add the new gradient layer and set its frame to the parent view bounds. Then, to keep the frame updated, we must use the callback which tells us that the view layout has been updated: viewDidLayoutSubviews if we are inside a UIViewController, or layoutSubviews inside a UIView.
In the following example, we use the implementation inside an UIViewController:
class ViewController: UIViewController {
  let gradientLayer: CAGradientLayer = {
      let layer = CAGradientLayer()
      layer.colors = [
          UIColor.red.cgColor,
          UIColor.green.cgColor
      ]
      return layer
  }()
  override func viewDidLoad() {
      super.viewDidLoad()
      view.layer.addSublayer(gradientLayer)
      gradientLayer.frame = view.bounds
  }
  override func viewDidLayoutSubviews() {
      super.viewDidLayoutSubviews()
      gradientLayer.frame = view.bounds
  }
}
Update With KVO
The second approach is to use KVO to observe the parent view's bounds. When the bounds change, we manually update the layer's frame to fill its parent view:
class ViewController: UIViewController {
  let gradientLayer: CAGradientLayer = {
      let layer = CAGradientLayer()
      layer.colors = [
          UIColor.yellow.cgColor,
          UIColor.brown.cgColor
      ]
      return layer
  }()
  override func viewDidLoad() {
      super.viewDidLoad()
      view.layer.addSublayer(gradientLayer)
      gradientLayer.frame = view.bounds
  }
  override func viewWillAppear(_ animated: Bool) {
      super.viewWillAppear(animated)
      view.addObserver(self, forKeyPath: #keyPath(UIView.bounds), options: .new, context: nil)
  }
  override func viewWillDisappear(_ animated: Bool) {
      super.viewWillDisappear(animated)
      view.removeObserver(self, forKeyPath: #keyPath(UIView.bounds))
  }
  override func observeValue(forKeyPath keyPath: String?, of object: Any?, change: [NSKeyValueChangeKey : Any]?, context: UnsafeMutableRawPointer?) {
      if let objectView = object as? UIView,
          objectView === view,
          keyPath == #keyPath(UIView.bounds) {
          gradientLayer.frame = objectView.bounds
      }
  }
}
Remember to remove the observer when you use KVO.
Custom UIView
The last approach is using a custom UIView.
We know that the main layer of any UIView automatically fills its view; this means that if a view has a specific frame, its main layer has that frame too. We also know that when we apply constraints to a subview, its frame is automatically updated to satisfy them. From this, we can deduce that if we set proper constraints on a subview, its main layer will also end up at the desired frame, because the layer always matches the frame of its view.
The drawback of this approach is that we must replace every sublayer with a custom UIView. On the other hand, the constraints applied to this custom view are automatically applied to its main layer as well.
For this approach, we must create a custom UIView class:
class LayerContainerView: UIView
We said that a view's main layer is a CALayer by default. For our custom view we want a main layer of type CAGradientLayer, so we must override the layerClass property, which specifies the type of the main layer:
override public class var layerClass: Swift.AnyClass {
  return CAGradientLayer.self
}
If we don't override it, the main layer will always be a plain CALayer, with no way to change its type.
And, finally, we can set our gradient in awakeFromNib:
override func awakeFromNib() {
  super.awakeFromNib()
  guard let gradientLayer = self.layer as? CAGradientLayer else { return }
  gradientLayer.colors = [
      UIColor.blue.cgColor,
      UIColor.cyan.cgColor
  ]
}
At the end, the class should be like this:
class LayerContainerView: UIView {
  override public class var layerClass: Swift.AnyClass {
      return CAGradientLayer.self
  }
  override func awakeFromNib() {
      super.awakeFromNib()
      guard let gradientLayer = self.layer as? CAGradientLayer else { return }
      gradientLayer.colors = [
          UIColor.blue.cgColor,
          UIColor.cyan.cgColor
      ]
  }
}
Final Comparison
At this point, we have three valid workarounds. The next step is testing their performance to understand which approach is best. To compare them, we can create a sample project that uses all of them and check how they behave.
If you don’t want to test by yourself, you can watch the following video with the final result:
In this sample app, I added three sublayers and rotated the simulator, with slow animations enabled, to check how the layers behave when the parent view's bounds change.
Spoiler:
As we can notice in the video, the custom UIView approach performs better than the other two. The reason is that we rely on the Auto Layout applied to the view instead of updating the sublayer frame manually. Therefore, creating a custom UIView is an acceptable trade-off to obtain the smoothest result.
Conclusion
We have just seen some workarounds for the missing Auto Layout support on sublayers, and which approach works best. Unfortunately, these are just workarounds and not official APIs, which means there may be an approach even better than the custom UIView. For this reason, if you have a better approach, feel free to leave a comment describing your solution. I would be more than happy to discuss alternatives. Thank you.
usirin-blog · 11 years ago
Reload Data of UITableView After going 'Back'
Let's say you are editing data on the phone, and when you're done with your edit you tap the Save button and programmatically return to the ViewController you were in before with:
[self.navigationController popViewControllerAnimated:YES];
But if you do this and you depend on the changed data, the table won't update unless you reload it in the previous ViewController's viewWillAppear:animated method.
I was always using viewDidLoad instead of viewWillAppear, and most of the time it gave me the results I expected.
But viewDidLoad is called just once, when the ViewController is initialized for the first time, whereas viewWillAppear is called every time that ViewController is about to be shown.
So, as a solution, to reload the data every time a ViewController shows up you can use:
- (void)viewWillAppear:(BOOL)animated {
    [super viewWillAppear:animated];
    [self.tableView reloadData];
}
saikyoapp · 12 years ago
More information on UITableView
http://d.hatena.ne.jp/glass-_-onion/20090324/1237864499
Very instructive.
The awkward feeling when row selection and deselection aren't handled properly is actually quite serious.
And it's rather admirable of Apple to reject apps that don't get this right.
I see. A while ago, only my calendar-list view started crashing when returning via the back button in the title bar; it turns out there were methods I should have implemented.
viewWillAppear and viewDidAppear
are apparently essential etiquette.
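For reference, the usual fix for the row-selection issue mentioned above is to clear the selection when the view reappears. A minimal sketch, assuming a plain UITableViewController subclass:

```swift
import UIKit

class ListViewController: UITableViewController {
    // Clearing the selection when the view reappears restores the
    // standard "highlight fades after returning" behavior; leaving a
    // row permanently selected is what causes the awkward feel
    // described above.
    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        if let indexPath = tableView.indexPathForSelectedRow {
            tableView.deselectRow(at: indexPath, animated: animated)
        }
    }
}
```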
siva3155 · 6 years ago
300+ TOP iOS Interview Questions and Answers
iOS Interview Questions for freshers and experienced:
1. What is the latest iOS version?
At the time of writing, the latest version of iOS is 13.
2. What is the output of the following program?
let numbers =
let numberSum = numbers.reduce(0, { $0 + $1 })
Answer: 10
3. Conversion of an error to an optional value can be used as a method to handle errors in Swift. (True/False)
True
4. Structures can be inherited. (True/False)
False
5. What is the output of the following program?
var randomArray: =
print(randomArray)
Answer: Compilation error (1 is an Int, not an object)
6. What is the output of the following program?
class FirstClass {
    func doFunction() {
        print("I am superclass")
    }
}
class SecondClass: FirstClass {
    override func doFunction() {
        print("I am subclass")
    }
}
let object = SecondClass()
object.doFunction()
Answer: I am subclass
7. What is ARC?
Automatic Reference Counting.
8. What is an ARWorldMap?
An ARWorldMap object contains a snapshot of all the spatial mapping information that ARKit uses to locate the user's device in real-world space.
9. How is a shared AR experience created?
Once the AR world map created on one device is transferred to another device using peer-to-peer connectivity or some other reliable channel, the other device can have the same AR experience as the first. To create an ongoing shared AR experience, like placing an AR object: when one user taps the scene, the app creates an anchor and adds it to the local ARSession; instead of sending the whole map, we serialize that ARAnchor using Data and send it to the other devices in the multipeer session.
10. What are the main UITableViewDataSource methods needed to display data in a table view?
numberOfSections, numberOfRowsInSection, cellForRowAtIndexPath
iOS Interview Questions
11. What are the two basic things needed to create an NSFetchRequest in Core Data?
The entity name and a sort descriptor / NSPredicate.
12. What are tuples in Swift?
Tuples are temporary containers for multiple values: a comma-separated list of types enclosed in parentheses. In other words, a tuple groups multiple values into a single compound value.
13. What are the control transfer statements in Swift?
break, continue, fallthrough, return, throw
14. What are closures? Write an example program using a closure.
Closures are self-contained blocks of functionality that can be passed around and used in code.
var addClosure = { (a: Int, b: Int) in
    return a + b
}
let result = addClosure(1, 2)
print(result)
15. Briefly explain reference types and value types with examples.
Classes are reference types and structures are value types. When a class instance is assigned to a variable, only a reference to the original object is copied, so any subsequent changes through the new variable also affect the original. With a struct, assignment generates a copy, so changes to the copy have no effect on the original value.
16. What are the features of Swift?
Variables are always initialized before use. Memory is managed automatically. Arrays and integers are checked for overflow. A switch statement can be used instead of chained "if" statements. It eliminates classes that are in an unsafe mode.
17. What is the significance of "?" in Swift?
The question mark (?) used during the declaration of a property makes the property optional.
18. What is initialization?
This process involves setting an initial value for each stored property on an instance and performing any other setup required before the new instance is ready for use, e.g.:
init() {
    // initialization code here
}
19. What is NSURLSession?
NSURLSession is a replacement for NSURLConnection; like its predecessor, it is both a distinct class and a group of related APIs.
20. What are the session configurations?
NSURLSessionConfiguration provides a few factory methods to create a session configuration:
Default configuration: provides access to the global singleton storage and settings.
Ephemeral session configuration: a private, in-memory-only storage.
Background session configuration: an out-of-process configuration keyed to an identifier string.
21. How do you define kSomeConstant?
e.g.: let kSomeConstant: Int = 80
Without the type annotation it would be implicitly inferred as an integer; if you want to be more specific, you can annotate it as above.
22. Define hash value vs raw value.
hashValue: if an enum is declared without a raw type, there is no rawValue available, but every member has a hashValue.
rawValue: a typed value that you can assign to the enum members.
23. Define static binding and dynamic binding.
Static binding is resolved at compile time; method overloading is an example. Dynamic binding (virtual binding) is resolved at run time; method overriding is an example.
24. What is method overloading?
Defining a method with the same name multiple times with different arguments is known as method overloading.
25. What is method overriding?
When a subclass provides a different implementation of a method defined in its superclass, with the same name, arguments and return type, that is called overriding. The implementation in the subclass overwrites the one provided in the superclass.
26. What is QoS?
QoS stands for Quality of Service, and it can be applied all over iOS.
One can prioritize queues, thread objects, dispatch queues and POSIX threads. By assigning the correct priority to the work, iOS apps remain quick, snappy and responsive.
27. What is User Interactive?
Work that happens on the main thread and must complete immediately in order to provide a nice user experience, e.g. drawing and UI animations.
28. What is User Initiated?
Work that the user kicks off and that should yield immediate results. It is performed asynchronously when initiated from the UI and is mapped to the high-priority global queue. This work must complete for the user to continue.
29. What is a deadlock?
A deadlock is a situation where two different programs or processes depend on one another for completion: a set of processes are blocked because each is holding a resource while waiting for a resource held by another.
30. Define run loop.
A run loop is a collection of input sources and timers to be monitored, together with the run loop observers to be notified.
31. What is iOS?
iOS (originally iPhone OS) is a mobile operating system created and developed by Apple Inc. exclusively for its hardware; apps for it are written in Objective-C and Swift.
32. What is the difference between Cocoa and Cocoa Touch?
Cocoa includes the Foundation and AppKit frameworks for developing applications that run on macOS. Cocoa Touch includes the Foundation and UIKit frameworks for developing applications that run on iPhone, iPod touch and iPad.
33. What are the different property attributes?
strong, weak, assign, copy, retain, atomic and nonatomic.
34. Explain frame and bounds of a view in iOS.
Bounds is the rectangle of a view relative to its own coordinate system (origin usually (0, 0)). Frame is the rectangle of a view relative to its superview's coordinate system.
35. What are the common design patterns in iOS?
Singleton, Delegate, Model-View-Controller, Observer, Facade, Command, Template Method.
36. What are the types of dispatch queues?
Serial: executes one task at a time, in sequential order.
Concurrent: executes one or more tasks concurrently.
Main dispatch queue: executes tasks on the application's main thread.
37. What are key-value coding (KVC) and key-value observing (KVO)?
Key-value coding (KVC): accessing a property or value using a string key.
Key-value observing (KVO): observing changes to a property or value.
38. Describe the application life cycle.
application:willFinishLaunchingWithOptions
application:didFinishLaunchingWithOptions
applicationDidBecomeActive
applicationWillResignActive
applicationDidEnterBackground
applicationWillEnterForeground
applicationWillTerminate
39. What are the different states of an application?
Not running, inactive, active, background, suspended.
40. Describe the view life cycle.
loadView
loadViewIfNeeded
viewDidLoad
viewWillAppear
viewWillLayoutSubviews
viewDidLayoutSubviews
viewDidAppear
viewWillDisappear
viewDidDisappear
41. What are delegates and notifications?
Delegate: creates a relationship between objects; it is one-to-one communication.
Notification: used when an object wants to notify other objects of an event; it is one-to-many communication.
42. What is the significance of "?" in Swift?
The question mark (?) used during the declaration of a property makes the property optional, allowing it to hold no value.
43. What are the levels of priority in QoS?
User Interactive, User Initiated, Utility, Background.
44. How do you write an optional value in Swift?
An optional can hold either a value or no value. Optionals are written by appending a "?" to the type.
45. How do you unwrap an optional value in Swift?
The simplest way to unwrap an optional is to add a "!" after the optional's name. This is called "force unwrapping".
46. What is the use of an if-let statement in Swift?
Using if-let, we can unwrap an optional safely; force-unwrapping a nil value may crash the app.
47. What is id in Objective-C?
id is a generic data type: it specifies a reference to any Objective-C object.
48. What are categories in Objective-C?
Categories provide the ability to add functionality to a class without changing the actual class.
49. What is an extension in Swift?
An extension adds new functionality to an existing class, structure, enumeration or protocol type. Extensions are similar to categories in Objective-C.
50. Define class methods and instance methods.
An instance method is accessed on an instance of the class; a class method is accessed on the class itself.
51. How do you add a Swift file to an existing Objective-C project?
If you add a new Swift file to an existing project, Xcode will ask you to add an Objective-C bridging header.
52. What is Auto Layout and why is it used?
Auto Layout is a constraint-based layout system. Using it, developers create adaptive interfaces that respond appropriately to changes in screen size and device orientation.
53. In how many ways can constraints be created programmatically?
Three ways: layout anchors, the NSLayoutConstraint class, and the Visual Format Language.
54. Is it possible to create multiple storyboards in a single project? If yes, how do you switch from one storyboard to another?
Yes. Using segues and storyboard references, we can switch from one storyboard to another.
55. What are let and var in Swift?
let declares an immutable value (a constant) that cannot be changed; var declares a mutable variable that can be changed.
56. What are IBOutlet and IBAction in iOS?
An Interface Builder outlet (IBOutlet) is a variable that references a UI component. An Interface Builder action (IBAction) is a function that is called when a specific user interaction occurs.
57. What is a bundle in iOS?
A bundle is a directory in the file system that contains the executable code and related resources, such as images and sounds, together in one place.
58. When would you use deinit in Swift?
deinit can be used if you need to perform some action or cleanup before the object is deallocated.
59. What is the responsibility of URLSession?
URLSession is responsible for sending and receiving HTTP requests.
60. What are the types of URLSessionTask?
URLSessionDataTask, URLSessionUploadTask, URLSessionDownloadTask
61. What is the main difference between NSURLSession and NSURLConnection?
NSURLConnection: when the app goes into the background or stops running, everything being received or sent over NSURLConnection is lost.
NSURLSession: gives your app the ability to perform background downloads while the app is not running or is suspended.
62. What is JSON?
JSON (JavaScript Object Notation) is a text-based, lightweight and easy format for storing and exchanging data.
63. What is JSONSerialization?
The built-in way of parsing JSON is called JSONSerialization; it can convert a JSON string into a collection of dictionaries, arrays, strings and numbers.
64. What is the difference between Core Data and SQLite?
Core Data is a framework for managing an object graph; it is not a database. Core Data can use an SQLite database as its persistent store, but it also supports other persistent store types, including a binary store and an in-memory store. SQLite is a lightweight relational database.
65. What is the difference between Keychain and NSUserDefaults?
Keychain: if the user removes the app from the device, a saved username and password are still there.
NSUserDefaults: if the user removes the app from the device, the saved username and password are removed as well.
66. What is app thinning? How do you reduce app size?
App thinning is the concept of reducing the app's download size. We can reduce app size using app slicing, bitcode and on-demand resources.
67. What is a Bundle ID?
The Bundle ID is the unique string that identifies your application to the system.
68. What are the types of certificates required for developing and distributing apps?
Development and distribution certificates.
Development certificate: used during development.
Distribution certificate: used for submitting apps to the App Store or for in-house distribution.
69. Which binaries are required to install an app on a device?
.ipa, .app
70. What are the advantages of Swift over Objective-C?
Readability, easier maintenance, a safer platform, less code and less legacy, speed.
71. What is the latest Xcode version and what are its features?
At the time of writing, the latest Xcode version is 10.1. Features: dark mode, the Swift source code editor, improved debugging tools, playgrounds for machine learning, source control.
72. What are the new features in iOS 12 for developers?
ARKit 2, Siri Shortcuts, CarPlay for navigation apps, Health Records, Natural Language.
73. How do you pass data between view controllers?
Segue, in the prepareForSegue method (forward); delegate (backward); setting a variable directly (forward).
74. What is the output of the following program?
class Cricket {
    var score = 100
}
var player1 = Cricket()
var player2 = player1
player2.score = 200
print(player1.score, player2.score)
Answer: 200 200
Read the full article
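The behavior in question 74 follows from reference semantics: both variables point at the same class instance. A small self-contained sketch contrasts it with a struct, where assignment copies the value (the type names here are illustrative, not from the original question):

```swift
// Reference semantics: both variables point at the same instance.
class CricketClass {
    var score = 100
}
let player1 = CricketClass()
let player2 = player1          // copies the reference, not the object
player2.score = 200
print(player1.score, player2.score)   // 200 200

// Value semantics: assignment makes an independent copy.
struct CricketStruct {
    var score = 100
}
var player3 = CricketStruct()
var player4 = player3          // copies the value
player4.score = 200
print(player3.score, player4.score)   // 100 200
```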
iyarpage · 8 years ago
Augmented Reality and ARKit Tutorial
This is an abridged chapter from our book 2D Apple Games by Tutorials, which has been completely updated for Swift 4 and iOS 11. This tutorial is presented as part of our iOS 11 Launch Party — enjoy!
Augmented Reality. The phrase alone conjures up images of bionic limbs and holodecks. Wouldn’t it be cool if you could turn your iOS device into a mini-holodeck? Well, thanks to Apple’s exciting new framework named ARKit, now you can!
Augmented reality (AR), as its name indicates, adds computer-generated objects to reality. In short, you view the world through a camera and interact with virtual 2D or 3D objects inside that view.
With 2D AR, you can add overlays or signposts that respond to geographic location or visual features in real time. Or with 3D AR, you can visualize how a piece of furniture might look inside your living room without ever leaving your home.
AR has been around for a while. For example, Snapchat's 3D filters recognize facial features and wrap 3D objects around your head. But in order for that to work, you needed some hard-core math, as well as hardware that could track in real-world space. With ARKit, Apple has provided both.
To demonstrate the differences between 2D and 3D ARKit, here is an example of Apple’s ARKit/SceneKit 3D template — a spaceship landing in my garden:
Now take a look at the 2D ARKit/SpriteKit template. With this template, you’re able to tap the screen and anchor a Space Invader 0.2 meters directly in front of your device’s camera.
In this tutorial, you’re going to step up the Pest Control game from 2D Apple Games by Tutorials and convert it into an immersive augmented reality first-person shooter named ARniegeddon, bringing bugs and firebugs into your very own home with the power of ARKit.
Getting Started
ARniegeddon is a first-person shooter. You’ll add bugs to the view and shoot them from a distance. Firebugs, just as they were in Pest Control, are a bit tougher to destroy. You’ll have to locate bug spray and pick it up by scooping your phone through the bug spray canister. You’ll then be able to aim at a firebug and take it down.
You’ll start from a SpriteKit template, and you’ll see exactly what goes into making an AR game using ARKit.
This is what the game will look like when you’ve completed this tutorial:
Download the starter project for this tutorial, and run the game on your device. The starter project is simply the Hello World code created from a new SpriteKit template. I’ve added the image and sound assets to it. There’s also a Types.swift file that has an enum for the sounds you’ll be using.
Note: To run the game on your device, you must select a development team under General\Signing of the ARniegeddon target.
Requirements
There are a few requirements for using ARKit. So before you get too far into this tutorial, make sure you have all of these:
An A9 or later processor. Only the iPhone 6s and up, the 2017 iPads and the iPad Pros can run ARKit.
Space. You’ll need plenty of space. One of the great things about AR is that you can develop games like Pokémon Go and encourage people to leave their homes — and play games in the park. To play ARniegeddon, you’ll need a clear space so you can capture bugs without falling over your furniture.
Contrast. If you’re in a dark room, you wouldn’t expect to see much, would you? With augmented reality, contrast is the key. So, if your room has white reflective tiles, white furniture and white walls, or is too dark, things won’t work too well; the camera needs contrast in order to distinguish surfaces and distances of objects.
How AR Works
Using a process called Visual Inertial Odometry (VIO), ARKit uses your device’s motion sensors, combined with visual information from the camera, to track the real world. Some clever math takes this tracking information and maps features in the real 3D world to your 2D screen.
When your game first starts, the device sets its initial position and orientation. The camera determines features of possible objects in its frame. When you move the device, there’s a new position and orientation. Because you’ve moved, the camera sees the objects slightly differently. The device now has enough information to triangulate on a feature and can work out the distance of an object. As you move, the device constantly refines the information and can gradually work out where there are horizontal surfaces.
Note: At the time of this writing, vertical surfaces are not calculated.
In addition to all this tracking, the sensors can examine the amount of available light and apply the same lighting to the AR objects within the scene.
Rendering the View
The first thing you’ll change is the view itself. When using SpriteKit with ARKit, your main view will be an ARSKView. This is a subclass of SKView that renders live video captured by the camera in the view.
In Main.storyboard, in the view hierarchy, select View, which is listed under Game View Controller. In the Identity Inspector, change the class to ARSKView.
In GameViewController.swift, replace all the imports at the top of the file with this:
import ARKit
Next, add a property for the view to GameViewController:
var sceneView: ARSKView!
Then, change this:
if let view = self.view as! SKView? {
…to this:
if let view = self.view as? ARSKView { sceneView = view
Here you get the loaded view as an ARSKView — matching your change in Main.storyboard — and you set sceneView to this view. Now the main view is set to display the camera video feed.
World Tracking with Sessions
You set up tracking by starting a session on an ARSKView. The configuration class ARWorldTrackingConfiguration tracks orientation and position of the device. In the configuration, you can specify whether or not to detect horizontal surfaces and also turn off lighting estimations.
Add these two methods to GameViewController:
override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    let configuration = ARWorldTrackingConfiguration()
    sceneView.session.run(configuration)
}

override func viewWillDisappear(_ animated: Bool) {
    super.viewWillDisappear(animated)
    sceneView.session.pause()
}
Here you start the session when the view appears and pause the session when the view disappears.
If you run the game at this time, you’ll get a crash. That’s because the app is trying to access the camera without permission.
Open Info.plist, control-click Information Property List, and choose Add Row.
Add the key NSCameraUsageDescription.
Double-click in the value field, and add the following description, which explains to the user how the game will use the camera:
For an immersive augmented reality experience, ARniegeddon requires access to the camera
Note: If you want to restrict your game to devices that support ARKit, find the UIRequiredDeviceCapabilities section in Info.plist (that’s the Required device capabilities section), and add arkit as a string entry in the array.
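In source form, those two Info.plist entries look roughly like this (a sketch; the UIRequiredDeviceCapabilities entry is only needed if you chose to restrict the game to ARKit devices):

```xml
<key>NSCameraUsageDescription</key>
<string>For an immersive augmented reality experience, ARniegeddon requires access to the camera</string>
<key>UIRequiredDeviceCapabilities</key>
<array>
    <string>arkit</string>
</array>
```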
Build and run the game. You’ll get a notification requesting access to the camera, with the text indicating why access is required. When you do, tap OK.
The template’s GameScene code still runs, but with the camera view in the background. It’s rendering to the ARSKView that you set up earlier.
Note: When developing for augmented reality, you’re often moving around and generally don’t want the device to be tethered to the computer. Wireless development, new in Xcode 9, assists greatly with this. You can set this up under Window\Devices and Simulators.
Respond to Session Events
ARSKView’s session has delegate methods for certain events. For example, you’ll want to know if the session failed. Perhaps the user denied access to the camera, or she could be running the game on a device that doesn’t support AR. You need to address these issues.
In GameViewController.swift, add an extension for the delegate methods with placeholder error messages:
extension GameViewController: ARSKViewDelegate {
    func session(_ session: ARSession, didFailWithError error: Error) {
        print("Session Failed - probably due to lack of camera access")
    }

    func sessionWasInterrupted(_ session: ARSession) {
        print("Session interrupted")
    }

    func sessionInterruptionEnded(_ session: ARSession) {
        print("Session resumed")
        sceneView.session.run(session.configuration!,
                              options: [.resetTracking, .removeExistingAnchors])
    }
}
session(_:didFailWithError:): will execute when the view can’t create a session. This generally means that to be able to use the game, the user will have to allow access to the camera through the Settings app. This is a good spot to display an appropriate dialog.
sessionWasInterrupted(_:): means that the app is now in the background. The user may have pressed the home button or received a phone call.
sessionInterruptionEnded(_:): means that play is back on again. The camera won’t be in exactly the same orientation or position so you reset tracking and anchors. In the challenge at the end of the tutorial, you’ll restart the game.
Next, replace viewDidLoad() with this code:
override func viewDidLoad() {
    super.viewDidLoad()
    if let view = self.view as? ARSKView {
        sceneView = view
        sceneView!.delegate = self
        let scene = GameScene(size: view.bounds.size)
        scene.scaleMode = .resizeFill
        scene.anchorPoint = CGPoint(x: 0.5, y: 0.5)
        view.presentScene(scene)
        view.showsFPS = true
        view.showsNodeCount = true
    }
}
Here you set the view’s delegate and initialize GameScene directly, instead of through the scene file.
The Current Frame, Camera and Anchors
You’ve now set up your ARSKView with a session so the camera information will render into the view.
For every frame, the session will capture the image and tracking information into an ARFrame object, named currentFrame. This ARFrame object has a camera which holds positional information about the frame, along with a list of anchors.
These anchors are stationary tracked positions within the scene. Whenever you add an anchor, the scene view will execute a delegate method view(_:nodeFor:) and attach an SKNode to the anchor. When you add the game’s bug nodes to the scene, you’ll attach the bug nodes to these anchors.
Adding Bugs to the Scene
Now you’re ready to add bugs to the game scene.
First, you’ll remove all the template code. Delete GameScene.sks and Actions.sks since you won’t need them anymore.
In GameScene.swift, remove all the code in GameScene, leaving you with an empty class.
class GameScene: SKScene { }
Replace the imports at the top with:
import ARKit
To get acquainted with ARKit, and possibly help with your entomophobia, you’re going to place a bug just in front of you.
Create a convenience property to return the scene’s view as an ARSKView:
var sceneView: ARSKView {
    return view as! ARSKView
}
Before you add the bug to your AR world, you need to make sure the AR session is ready. An AR session takes time to set up and configure itself.
First, create a property so you can check whether you added your AR nodes to the game world:
var isWorldSetUp = false
You’ll load the bug once — only if isWorldSetUp is false.
Add the following method:
private func setUpWorld() {
  guard let currentFrame = sceneView.session.currentFrame
    else { return }
  isWorldSetUp = true
}
Here you check whether the session has an initialized currentFrame. If the session doesn’t have a currentFrame, then you’ll have to try again later.
update(_:) is called every frame, so you can attempt to call the method from there.
Override update(_:):
override func update(_ currentTime: TimeInterval) {
  if !isWorldSetUp {
    setUpWorld()
  }
}
This way, you run the setup code only once, and only when the session is ready.
Next you’ll create an anchor 0.3 meters in front of the camera, and you’ll attach a bug to this anchor in the view’s delegate.
But first there’s some math-y stuff to explain. Don’t worry — I’ll be gentle!
A Brief Introduction to 3D Math
Even though your game is a 2D SpriteKit game, you’re looking through the camera into a 3D world. Instead of setting position and rotation properties of an object, you set values in a 3D transformation matrix.
You may be a Neo-phyte to 3D (no more Matrix jokes, I promise!), so I’ll briefly explain.
A matrix is made up of rows and columns. You’ll be using four-dimensional matrices, which have four rows and four columns.
Both ARCamera and ARAnchor have a transform property, which is a four-dimensional matrix holding rotation, scaling and translation information. The current frame calculates ARCamera’s transform property, but you’ll adjust the translation element of the ARAnchor matrix directly.
The magic of matrices — and why they are used everywhere in 3D — is that you can create a transform matrix with rotation, scaling and translation information and multiply it by another matrix with different information. The result is then a new position in 3D space relative to an origin position.
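To make this concrete, here is a minimal sketch of multiplying a camera-like transform by a translation using simd. The positions are assumed values for illustration, not taken from the project:

```swift
import simd

// A camera sitting 1 meter along the x-axis (identity rotation),
// and a translation 0.3 meters forward (ARKit's forward is -z).
var camera = matrix_identity_float4x4
camera.columns.3.x = 1.0

var translation = matrix_identity_float4x4
translation.columns.3.z = -0.3

// Multiplying yields a new transform positioned relative to the camera.
let transform = camera * translation
// transform.columns.3 is (1.0, 0, -0.3, 1):
// the camera's position, moved 0.3 meters forward.
```

Because the camera's rotation here is the identity, the result is simply the camera's translation plus the offset; with a rotated camera, "forward" would point wherever the camera faces.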
Add this after the guard statement in setUpWorld():
var translation = matrix_identity_float4x4
Here you create a four-dimensional identity matrix. When you multiply any matrix by an identity matrix, the result is the same matrix. For example, when you multiply any number by 1, the result is the same number. 1 is actually a one-dimensional identity matrix. The origin’s transform matrix is an identity matrix. So you always set a matrix to identity before adding positional information to it.
This is what the identity matrix looks like:

[ 1  0  0  0 ]
[ 0  1  0  0 ]
[ 0  0  1  0 ]
[ 0  0  0  1 ]
The last column consists of (x, y, z, 1) and is where you set translation values.
Add this to setUpWorld(), right after the previous line:
translation.columns.3.z = -0.3
This is what the translation matrix looks like now:

[ 1  0  0   0   ]
[ 0  1  0   0   ]
[ 0  0  1  -0.3 ]
[ 0  0  0   1   ]
Rotation and scaling use the first three columns and are more complex. They’re beyond the scope of this tutorial, and in ARniegeddon you won’t need them.
Continue adding to your code in setUpWorld():
let transform = currentFrame.camera.transform * translation
Here you multiply the transform matrix of the current frame’s camera by your translation matrix. This results in a new transform matrix. When you create an anchor using this new matrix, ARKit will place the anchor at the correct position in 3D space relative to the camera.
Note: To learn more about matrices and linear algebra, take a look at 3Blue1Brown’s video series Essence of linear algebra at http://bit.ly/2bKj1AF
Now, add the anchor to the scene using the new transformation matrix. Continue adding this code to setUpWorld():
let anchor = ARAnchor(transform: transform)
sceneView.session.add(anchor: anchor)
Here you add an anchor to the session. The anchor is now a permanent feature in your 3D world (until you remove it). Each frame tracks this anchor and recalculates the transformation matrices of the anchors and the camera using the device’s new position and orientation.
When you add an anchor, the session calls sceneView’s delegate method view(_:nodeFor:) to find out what sort of SKNode you want to attach to this anchor.
Next, you’re going to attach a bug.
In GameViewController.swift, add this delegate method to GameViewController’s ARSKViewDelegate extension:
func view(_ view: ARSKView, nodeFor anchor: ARAnchor) -> SKNode? {
  let bug = SKSpriteNode(imageNamed: "bug")
  bug.name = "bug"
  return bug
}
Hold up your phone and build and run. And I mean… RUN! Your space has been invaded!
Move your phone around, and the bug will stay where it’s anchored. The tracking becomes more effective the more information you give the phone, so move your phone around a bit to give it some position and orientation updates.
Notice that whichever way you turn the camera, the bug faces you. This is called a billboard, which is a technique used in many 3D games as a cheap way of adding elements such as trees and grass to a scene. Simply add a 2D object to a 3D scene and make sure that it’s always facing the viewer.
If you want to be able to walk around the bug and see what it looks like from behind, you’ll have to model the bug in a 3D app such as Blender, and use ARKit with either SceneKit or Metal.
Note: You can find out more about ARKit and SceneKit in our book iOS 11 by Tutorials, available here: http://bit.ly/2wcpx07.
Light Estimation
If you’re in a darkened room, then your bug will be lit up like a firefly. Bring your hand to gradually cover the camera and see the bug shine. Luckily, ARKit has light estimation so that your bug can lurk creepily in dark corners.
In GameScene.swift, add this to the end of update(_:):
// 1
guard let currentFrame = sceneView.session.currentFrame,
  let lightEstimate = currentFrame.lightEstimate else {
    return
}
// 2
let neutralIntensity: CGFloat = 1000
let ambientIntensity = min(lightEstimate.ambientIntensity, neutralIntensity)
let blendFactor = 1 - ambientIntensity / neutralIntensity
// 3
for node in children {
  if let bug = node as? SKSpriteNode {
    bug.color = .black
    bug.colorBlendFactor = blendFactor
  }
}
Here’s the breakdown of this code:
1. You retrieve the light estimate from the session's current frame.
2. The measure of light is lumens, and 1000 lumens is a fairly bright light. Using the light estimate's intensity of ambient light in the scene, you calculate a blend factor between 0 and 1, where 0 is the brightest.
3. Using this blend factor, you calculate how much black should tint the bugs.
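As a quick sanity check, here is the blend-factor arithmetic on its own, with assumed lumen readings rather than real sensor values:

```swift
import CoreGraphics

// 1000 lumens is treated as neutral; anything at or above it
// produces no tint, and darker readings tint more heavily.
let neutralIntensity: CGFloat = 1000

func blendFactor(forAmbientIntensity lumens: CGFloat) -> CGFloat {
  let ambient = min(lumens, neutralIntensity)
  return 1 - ambient / neutralIntensity
}

let bright = blendFactor(forAmbientIntensity: 1000)  // 0: no tint
let dark = blendFactor(forAmbientIntensity: 250)     // 0.75: heavy tint
```

The min() clamp is what keeps very bright scenes from producing a negative blend factor.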
As you pan about the room, the device will calculate available light. When there’s not much light, the bug will be shaded. Test this by holding your hand in front of the camera at different distances.
Shooting Bugs
Unless you’re an acrobat, these bugs are a little difficult to stomp on. You’re going to set up your game as a first-person shooter, very similar to the original Doom.
In GameScene.swift, add a new property to GameScene:
var sight: SKSpriteNode!
Override didMove(to:):
override func didMove(to view: SKView) {
  sight = SKSpriteNode(imageNamed: "sight")
  addChild(sight)
}
This adds a sight to the center of the screen so you can aim at the bugs.
You’ll fire by touching the screen. Still in GameScene, override touchesBegan(_:with:) as follows:
override func touchesBegan(_ touches: Set<UITouch>,
                           with event: UIEvent?) {
  let location = sight.position
  let hitNodes = nodes(at: location)
}
Here you retrieve an array of all the nodes that intersect the same xy location as the sight. Although ARAnchors are in 3D, SKNodes are still in 2D. ARKit very cleverly calculates a 2D position and scale for the SKNode from the 3D information.
You’ll now find out if any of these nodes are a bug, and if they are, retrieve the first one. Add this to the end of touchesBegan(_:with:):
var hitBug: SKNode?
for node in hitNodes {
  if node.name == "bug" {
    hitBug = node
    break
  }
}
Here you cycle through the hitNodes array and find out if any of the nodes in the array are bugs. hitBug now contains the first bug hit, if any.
You’ll need a couple of sounds to make the experience more realistic. The sounds are defined in Sounds in Types.swift; they are all ready for you to use.
Continue by adding this code to the end of the same method:
run(Sounds.fire)
if let hitBug = hitBug,
  let anchor = sceneView.anchor(for: hitBug) {
  let action = SKAction.run {
    self.sceneView.session.remove(anchor: anchor)
  }
  let group = SKAction.group([Sounds.hit, action])
  let sequence = [SKAction.wait(forDuration: 0.3), group]
  hitBug.run(SKAction.sequence(sequence))
}
You play a sound to indicate you've fired your weapon. If you hit a bug, you play the hit sound after a short delay to suggest the bug is some distance away, then remove the anchor for the node, which also removes the bug node itself.
Build and run, and kill your first bug!
Level Design
Of course, a game with one bug in it isn’t much of a game. In Pest Control, you edited tile maps in the scene editor to specify the positions of your bugs. You’ll be doing something similar here, but you’ll directly add SKSpriteNodes to a scene in the scene editor.
Although ARniegeddon is fully immersive, you’ll design the level as top down. You’ll lay out the bugs in the scene as if you are a god-like being in the sky, looking down at your earthly body. When you come to play the game, you’ll be at the center of the world, and the bugs will be all around you.
Create a new SpriteKit Scene named Level1.sks. Change the scene size to Width: 400, Height: 400. The size of the scene is largely irrelevant, as you will calculate the real world position of nodes in the scene using a gameSize property which defines the size of the physical space around you. You’ll be at the center of this “scene”.
Place three Color Sprites in the scene and set their properties as follows:
Name: bug, Texture: bug, Position: (-140, 50)
Name: bug, Texture: bug, Position: (0, 150)
Name: firebug, Texture: firebug, Position: (160, 120)
Imagine you’re at the center of the scene. You’ll have one bug on your left, one straight in front of you and the firebug on your right.
2D Design to 3D World
You’ve laid out the bugs in a 2D scene, but you need to position them in a 3D perspective. From a top view, looking down on the 2D scene, the 2D x-axis maps to the 3D x-axis. However, the 2D y-axis maps to the 3D z-axis — this determines how far away the bugs are from you. There is no mapping for the 3D y-axis — you’ll simply randomize this value.
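Here is that mapping worked through with concrete numbers, using the 400x400 scene and the 2x2 meter game size this chapter uses:

```swift
import CoreGraphics

let sceneSize = CGSize(width: 400, height: 400)
let gameSize = CGSize(width: 2, height: 2)

// A bug placed at (-140, 50) in Level1.sks...
let nodePosition = CGPoint(x: -140, y: 50)

// ...is first normalized against the scene size...
let positionX = nodePosition.x / sceneSize.width   // -0.35
let positionY = nodePosition.y / sceneSize.height  // 0.125

// ...then scaled to meters: scene x maps to world x, and
// scene y maps to world -z (how far in front of you it sits).
let worldX = Float(positionX * gameSize.width)     // -0.7 m, to your left
let worldZ = -Float(positionY * gameSize.height)   // -0.25 m, in front
```

These are exactly the values the first bug will get when the level loads.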
In GameScene.swift, set up a game size constant to determine the real world area that you’ll play the game in. This will be a 2-meter by 2-meter space with you in the middle. In this example, you’ll be setting the game size to be a small area so you can test the game indoors. If you play outside, you’ll be able to set the game size larger:
let gameSize = CGSize(width: 2, height: 2)
Replace setUpWorld() with the following code:
private func setUpWorld() {
  guard let currentFrame = sceneView.session.currentFrame,
    // 1
    let scene = SKScene(fileNamed: "Level1")
    else { return }

  for node in scene.children {
    if let node = node as? SKSpriteNode {
      var translation = matrix_identity_float4x4
      // 2
      let positionX = node.position.x / scene.size.width
      let positionY = node.position.y / scene.size.height
      translation.columns.3.x = Float(positionX * gameSize.width)
      translation.columns.3.z = -Float(positionY * gameSize.height)
      let transform = currentFrame.camera.transform * translation
      let anchor = ARAnchor(transform: transform)
      sceneView.session.add(anchor: anchor)
    }
  }
  isWorldSetUp = true
}
Taking each numbered comment in turn:
1. Here you load the scene, complete with bugs, from Level1.sks.
2. You calculate the position of the node relative to the size of the scene. ARKit translations are measured in meters. Turning 2D into 3D, you use the y-coordinate of the 2D scene as the z-coordinate in 3D space. Using these values, you create the anchor, and the view's delegate will add the SKSpriteNode bug for each anchor as before.
The 3D y value — that’s the up and down axis — will be zero. That means the node will be added at the same vertical position as the camera. Later you’ll randomize this value.
Build and run and see the bugs laid out around you.
The firebug on your right still has the orange bug texture instead of the red firebug texture. You’re creating it in ARSKViewDelegate’s view(_:nodeFor:), which currently doesn’t distinguish between different types of bug. You’ll adjust this later.
First, you’ll randomize the y-position of the bugs.
Still in setUpWorld(), before:
let transform = currentFrame.camera.transform * translation
add this:
translation.columns.3.y = Float(drand48() - 0.5)
drand48() creates a random value between 0 and 1. Here you shift it down to create a random value between -0.5 and 0.5. Assigning it to the translation matrix means the bug will appear in a random position between half a meter above the position of the device and half a meter below it. For this to work properly, the game assumes the user is holding the device at least half a meter off the ground.
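The shift-and-scale trick looks like this in isolation. The seed here is an assumed value, used only so the sketch is reproducible:

```swift
import Foundation

srand48(2017)                  // assumed seed, for reproducibility
let height = drand48() - 0.5   // in [-0.5, 0.5): vertical offset
let side = drand48() * 2 - 1   // in [-1, 1): used later for bug spray
```

Multiplying widens the range; subtracting re-centers it around zero.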
You’ll want to seed drand48(); otherwise the sequence of random numbers will be the same every time you run the app. That can be useful while you’re testing, but not so good in a real game.
Add this to the end of didMove(to:) to seed the random number generator:
srand48(Int(Date.timeIntervalSinceReferenceDate))
Build and run, and your bugs should show themselves in the same location, but at a different height.
Firebugs
Currently the game only has one type of bug in it. Now you’re going to create different nodes for bugs, firebugs and bug spray.
In Types.swift, add this enum:
enum NodeType: String {
  case bug = "bug"
  case firebug = "firebug"
  case bugspray = "bugspray"
}
New nodes will be attached to an anchor, so the anchor needs to have a type property to track the type of node you want to create.
Create a new file with the iOS/Source/Cocoa Touch Class template. Name the class Anchor and make it a subclass of ARAnchor. At the top of the file, replace the import statement with:
import ARKit
Add this new property to hold the type of node associated with the anchor:
var type: NodeType?
In GameScene.swift, in setUpWorld(), in the for loop, replace this:
let anchor = ARAnchor(transform: transform)
sceneView.session.add(anchor: anchor)
with this:
let anchor = Anchor(transform: transform)
if let name = node.name,
  let type = NodeType(rawValue: name) {
  anchor.type = type
  sceneView.session.add(anchor: anchor)
}
Here you get the type of the bug from the SKSpriteNode name you specified in Level1.sks. You create the new anchor, specifying the type.
In GameViewController.swift, in the ARSKViewDelegate extension, replace view(_:nodeFor:) with this:
func view(_ view: ARSKView, nodeFor anchor: ARAnchor) -> SKNode? {
  var node: SKNode?
  if let anchor = anchor as? Anchor {
    if let type = anchor.type {
      node = SKSpriteNode(imageNamed: type.rawValue)
      node?.name = type.rawValue
    }
  }
  return node
}
You check to see whether the anchor being added is of the subclass Anchor. If it is, then you create the appropriate SKSpriteNode using the anchor’s type.
Build and run, and this time the firebug on your right should have the correct red texture. Oh… also, you can’t kill it.
Anchor Collision
Remember how in Pest Control you needed bug spray to kill a firebug? Your current game will also place bug spray randomly around the scene, but the player will pick up bug spray by moving the phone over the bug spray canister. The player will then be able to kill one firebug with that bug spray.
The steps you’ll take are as follows:
Add bug spray at a random position when you add a firebug.
Check the distance of your device and the bug spray nodes each frame.
If a collision occurs, “pick up” bug spray by removing the anchor and bug spray node and providing a visual cue.
In GameScene.swift, add a new method to GameScene:
private func addBugSpray(to currentFrame: ARFrame) {
  var translation = matrix_identity_float4x4
  translation.columns.3.x = Float(drand48() * 2 - 1)
  translation.columns.3.z = -Float(drand48() * 2 - 1)
  translation.columns.3.y = Float(drand48() - 0.5)
  let transform = currentFrame.camera.transform * translation
  let anchor = Anchor(transform: transform)
  anchor.type = .bugspray
  sceneView.session.add(anchor: anchor)
}
In this method, you add a new anchor of type bugspray with a random position. You randomize the x (side) and z (forward/back) values between -1 and 1 and the y (up/down) value between -0.5 and 0.5.
In setUpWorld(), call this method when you add a firebug. After this:
sceneView.session.add(anchor: anchor)
add this:
if anchor.type == .firebug {
  addBugSpray(to: currentFrame)
}
Build and run to ensure that the game spawns one bug spray canister for each firebug.
Hint: If you can’t see the bug spray, walk away from the game area and point the device back. You should be able to see all of the nodes in the game area. You can even see through walls! And if you still can’t see it, try commenting out the lighting code in update(_:).
Now for the collision. This is a simplified collision, as a real physics engine would have more efficient algorithms.
update(_:) runs every frame, so you’ll be able to check the current distance of the bug spray from the device by using the camera’s transformation matrix and the bug spray’s anchor’s transformation matrix.
You’ll also need a method to remove the bug spray and its anchor when the collision is successful.
Add this method to GameScene to remove the bug spray and make a “success” sound:
private func remove(bugspray anchor: ARAnchor) {
  run(Sounds.bugspray)
  sceneView.session.remove(anchor: anchor)
}
Here you play a success sound and then remove the anchor from the session. Removing the anchor also removes the SKNode attached to it.
At the end of update(_:), add:
// 1
for anchor in currentFrame.anchors {
  // 2
  guard let node = sceneView.node(for: anchor),
    node.name == NodeType.bugspray.rawValue
    else { continue }
  // 3
  let distance = simd_distance(anchor.transform.columns.3,
                               currentFrame.camera.transform.columns.3)
  // 4
  if distance < 0.1 {
    remove(bugspray: anchor)
    break
  }
}
Going through this code point-by-point:
1. You process all of the anchors attached to the current frame.
2. You check whether the node for the anchor is of type bugspray. At the time of writing, there is an Xcode bug whereby subclasses of ARAnchor lose their properties, so you can't check the anchor type directly.
3. ARKit includes the simd framework, which provides a distance function. You use this to calculate the distance between the anchor and the camera.
4. If the distance is less than 10 centimeters, you remove the anchor from the session. This removes the bug spray node as well.
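The distance check itself is plain vector math. Here is a sketch with assumed positions, with the camera at the origin and an anchor 6 cm to the right and 8 cm ahead:

```swift
import simd

// Assumed positions, purely for illustration.
var anchorTransform = matrix_identity_float4x4
anchorTransform.columns.3 = simd_float4(0.06, 0, -0.08, 1)
let cameraTransform = matrix_identity_float4x4

// Distance between the two translation columns:
let distance = simd_distance(anchorTransform.columns.3,
                             cameraTransform.columns.3)
// sqrt(0.06^2 + 0.08^2) = 0.1, right at the 10 cm pickup threshold
```

The w components of both columns are 1, so they cancel and don't affect the result.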
You should give the player a visual cue when she manages to collide the device with the bug spray and pick it up. You’ll set up a property that swaps the sight image whenever its value changes.
Add this property to GameScene:
var hasBugspray = false {
  didSet {
    let sightImageName = hasBugspray ? "bugspraySight" : "sight"
    sight.texture = SKTexture(imageNamed: sightImageName)
  }
}
When you set hasBugspray to true, you change the sight to a different texture, indicating that you’re carrying the ultimate firebug exterminator.
At the end of remove(bugspray:), add this:
hasBugspray = true
Build and run and see if you can pick up a bug spray canister. Notice that while you’re holding a bug spray canister, the sight texture changes to a green one.
Firebug Destruction
In touchesBegan(_:with:), locate the for loop where you set up hitBug with the hit SKSpriteNode.
Replace this:
if node.name == "bug" {
with this:
if node.name == NodeType.bug.rawValue ||
  (node.name == NodeType.firebug.rawValue && hasBugspray) {
As well as checking to see if the node is a “bug”, you can now check to see if it’s a firebug. If it is a firebug and you have bug spray, then you’ve scored a hit.
At the end of touchesBegan(_:with:), add this:
hasBugspray = false
You only get one shot with the bug spray. If you miss, beware! You can no longer kill the firebug.
Where to Go From Here?
You can download the final project for this tutorial here.
Congratulations! At this point you have an almost playable game, and you’ve experimented with 2D ARKit. I hope I’ve whetted your appetite for the potential of augmented reality, so you’ll move into the more exciting third dimension. Our book 3D iOS Games by Tutorials, available here, will teach you how to work with SceneKit so you’ll be comfortable using 3D objects in 3D space.
Using ARKit and SceneKit, you’ll be able to move 3D models around the scene instead of attaching them to a stationary anchor. You’ll be able to walk around them and examine the model from behind. You’ll also be able to gather information about flat surfaces, place your models on a surface and measure distances accurately.
If you enjoyed what you learned in this tutorial, why not check out the complete 2D Apple Games by Tutorials book, available on our store?
Here’s a taste of what’s in the book:
Section I: Getting Started: This section covers the basics of making 2D games with SpriteKit. These are the most important techniques, the ones you’ll use in almost every game you make. By the time you reach the end of this section, you’ll be ready to make your own simple game. Throughout this section, you’ll create an action game named Zombie Conga, where you take the role of a happy-go-lucky zombie who just wants to party!
Section II: Physics and Nodes: In this section, you’ll learn how to use the built-in 2D physics engine included with SpriteKit. You’ll also learn how to use special types of nodes that allow you to play videos and create shapes in your game. In the process, you’ll create a physics puzzle game named Cat Nap, where you take the role of a cat who has had a long day and just wants to go to bed.
Section III: Tile Maps: In this section, you’ll learn about tile maps in SpriteKit and how to save and load game data. In the process, you’ll create a game named Pest Control, where you take control of a vigorous, impossibly ripped he-man named Arnie. Your job is to lead Arnie to bug-fighting victory by squishing all those pesky bugs.
Section IV: Juice: In this section, you’ll learn how to take a good game and make it great by adding a ton of special effects and excitement — also known as “juice.” In the process, you’ll create a game named Drop Charge, where you’re a space hero with a mission to blow up an alien space ship — and escape with your life before it explodes. To do this, you must jump from platform to platform, collecting special boosts along the way. Just be careful not to fall into the red hot lava!
Section V: Other Platforms: In this section, you’ll learn how to leverage your iOS knowledge to build games for the other Apple Platforms: macOS, tvOS and watchOS. In the process, you’ll create a game named Zombie Piranhas. In this game, your goal is to catch as many fish as possible without hooking a zombie — because we all know what happens when zombies are around.
Section VI: Advanced Topics: In this section, you’ll learn some APIs other than SpriteKit that are good to know when making games for the Apple platforms. In particular, you’ll learn how to add Game Center leaderboards and achievements into your game. You’ll also learn how to use the ReplayKit API. In the process, you’ll integrate these APIs into a top-down racing game named Circuit Racer, where you take the role of an elite race car driver out to set a world record — which wouldn’t be a problem if all this debris wasn’t on the track!
Section VII: Bonus Chapters: On top of the above, we included two bonus chapters. You can learn about the new ARKit framework by reworking the Pest Control game and turning it into an Augmented Reality game. If you liked the art in these mini-games and want to learn how to either hire an artist or make some art of your own, there’s a chapter to guide you through drawing a cute cat in the style of this book with Illustrator.
By the end of this book, you’ll have some great hands-on experience with how to build exciting, good-looking games using Swift and SpriteKit!
And to help sweeten the deal, the digital edition of the book is on sale for $49.99! But don’t wait — this sale price is only available for a limited time.
Speaking of sweet deals, be sure to check out the great prizes we’re giving away this year with the iOS 11 Launch Party, including over $9,000 in giveaways!
We hope you enjoy this update to one of our most-loved books. Stay tuned for more book releases and updates coming soon!
The post Augmented Reality and ARKit Tutorial appeared first on Ray Wenderlich.
saikyoapp · 12 years ago
Link
A better behavior is for the selection to fade out smoothly back to the deselected state when you return. Rather than the row simply appearing deselected when you come back, it's nicer if the user can briefly see which row was selected. To achieve this, implement viewWillAppear as follows:
- (void)viewWillAppear:(BOOL)animated
{
    [super viewWillAppear:animated];
    [_tableView deselectRowAtIndexPath:_tableView.indexPathForSelectedRow
                              animated:YES];
}
T-this is it!! For some reason, one table in my navigation stack wasn't doing this fade while the others were, and it always felt off.
I found this by googling 「uitableview 選択行 戻る」 ("uitableview selected row back").