invasivecode
Blog
116 posts
invasivecode · 7 years ago
“With Apple, you’re the customer. With Google, you’re the product.”
My friend has taken to repeating this phrase every time Google is outed in another privacy scandal. So my friend says it a lot.
With Apple, you’re the customer. With Google, you’re the product.
Most of us got our first glimpse of what it means to be an Apple customer back in 2014 when Apple released Apple Pay and vowed users wouldn’t be profiled, saying they would use their AI tech to prevent their users’ purchases from being tracked. We saw it again in 2016 when Apple went toe-to-toe with the FBI over a locked iPhone. The government wanted Apple to hack into the device and turn over encrypted data, and then build a permanent loophole into all devices for similar policing situations; Apple refused. At that point, some questioned the company’s endgame, but no one could doubt their stance on user privacy. In the years since, that commitment has only continued to grow, while Apple’s reputation as the data watchmen of the industry has solidified.
invasivecode · 7 years ago
AI, AR, UX & VR. We’re Living In The Future. Designers Need To Be Prepared.
The Waze traffic navigation app, Pokémon Go, Google Cardboard, the Amazon shopping recommendation tool — each of these uses the kind of technology once found in sci-fi stories and futuristic fantasies. But the very existence of these programs, applications, and devices, and the fact that many of them are in daily use, demonstrates that the future is now.
invasivecode · 7 years ago
HomePod is Not Echo and More Things You Should Know
More than eight months after it debuted at WWDC, and after a months-long postponement that made many an Apple enthusiast nearly lose their mind, Apple's HomePod finally went up for order on January 26 in the US, UK, and Australia (Germany and France are scheduled for this spring). The AI-enabled home speaker arrives on February 9.
Now that HomePod is finally here, we’ll take you through what you need to know about it. We’ll also confront some of the rumors and tell you what HomePod is not.
invasivecode · 7 years ago
SceneKit Tutorial Part 2
Let’s continue looking at the functionality and APIs offered by SceneKit. In the previous tutorial, I highlighted some of the basic classes you can use to create and render a 3D scene and to add 3D objects to it using the built-in shapes. In this second part, we are going to see how to handle the appearance of a node. The information in this post and the previous one is relevant if you are building ARKit applications, so I suggest you review the concepts highlighted in Part I and then check out the new topics covered here.
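As a taste of what's ahead, here is a minimal sketch of adjusting a node's appearance through its geometry's materials; the shape and values are illustrative, not the tutorial's actual sample code.

```swift
import UIKit
import SceneKit

// A minimal sketch: a node's look is controlled by the SCNMaterial
// objects attached to its geometry.
func makeShinyRedSphere() -> SCNNode {
    let sphere = SCNSphere(radius: 0.5)

    let material = SCNMaterial()
    material.diffuse.contents = UIColor.red        // base color
    material.specular.contents = UIColor.white     // highlight color
    material.lightingModel = .physicallyBased      // PBR shading (iOS 10+)

    sphere.materials = [material]
    return SCNNode(geometry: sphere)
}
```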
invasivecode · 7 years ago
A Story of Shazam, Siri & Spotify
Depending on who you talk to, Apple’s December 11 purchase of London-based Shazam was either the smartest thing to happen to Siri and Apple Music, or the best excuse ever to start using SoundHound.
Apple shelled out a reported $400 million for the “name that tune” app, down from a 2015 valuation of $1 billion. The deep discount wasn’t Apple’s only advantage in the sale. They bought Shazam right out from under Snap Inc. and Spotify, companies that were showing interest in purchasing the app for themselves and that have existing relationships with Shazam. Spotify is also Apple Music’s largest competitor.
invasivecode · 8 years ago
SceneKit Tutorial Part 1
We are going to start a new series of tutorials about SceneKit, the framework that allows you to build and manipulate 3D objects in a 3D scene. SceneKit was introduced for the first time in macOS 10.8 (Mountain Lion) and subsequently in iOS 8. More recently, SceneKit was added to tvOS 9 and watchOS 3.0.
SceneKit allows developers to create 3D games and add 3D content to apps using high-level scene descriptions. With the recent introduction of ARKit, SceneKit is today a very relevant framework: if you want to build Augmented Reality applications, you should learn SceneKit. The framework provides a rich set of APIs that make it easy to add animations, physics simulation, particle effects, and realistic physically based rendering.
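To give you a feel for that high-level style before we dive in, here is a minimal sketch of a complete scene built around one of the built-in shapes; the names and values are illustrative.

```swift
import SceneKit

// A minimal sketch: a scene with a built-in shape, a camera, and a
// light, ready to assign to an SCNView's scene property.
func makeScene() -> SCNScene {
    let scene = SCNScene()

    // One of SceneKit's built-in geometries.
    let box = SCNBox(width: 1, height: 1, length: 1, chamferRadius: 0.05)
    scene.rootNode.addChildNode(SCNNode(geometry: box))

    // A camera looking at the box from a short distance.
    let cameraNode = SCNNode()
    cameraNode.camera = SCNCamera()
    cameraNode.position = SCNVector3(x: 0, y: 0, z: 5)
    scene.rootNode.addChildNode(cameraNode)

    // An omnidirectional light so the box is actually visible.
    let lightNode = SCNNode()
    lightNode.light = SCNLight()
    lightNode.light?.type = .omni
    lightNode.position = SCNVector3(x: 0, y: 5, z: 5)
    scene.rootNode.addChildNode(lightNode)

    return scene
}
```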
invasivecode · 8 years ago
Metal video processing for iOS and tvOS
Real-time video processing is a particular case of digital signal processing. Technologies such as Virtual Reality (VR) and Augmented Reality (AR) strongly rely on real-time video processing to extract semantic information from each video frame and use it for object detection and tracking, face recognition, and other computer vision techniques.
Processing video in real time on a mobile device is quite a complex task because of the limited resources available on smartphones and tablets, but you can achieve amazing results when using the right techniques.
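One building block this kind of work relies on is getting camera frames into Metal without copying. Here is a minimal sketch, assuming frames arrive as BGRA CVPixelBuffers; the class name is illustrative.

```swift
import Metal
import CoreVideo

// A minimal sketch: wrapping camera pixel buffers as Metal textures
// through a CVMetalTextureCache, so a compute or render pipeline can
// process each frame without copying pixel data.
final class FrameTextureConverter {
    private var textureCache: CVMetalTextureCache?

    init?(device: MTLDevice) {
        guard CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device,
                                        nil, &textureCache) == kCVReturnSuccess else {
            return nil
        }
    }

    func makeTexture(from pixelBuffer: CVPixelBuffer) -> MTLTexture? {
        guard let cache = textureCache else { return nil }
        let width = CVPixelBufferGetWidth(pixelBuffer)
        let height = CVPixelBufferGetHeight(pixelBuffer)
        var cvTexture: CVMetalTexture?
        let status = CVMetalTextureCacheCreateTextureFromImage(
            kCFAllocatorDefault, cache, pixelBuffer, nil,
            .bgra8Unorm, width, height, 0, &cvTexture)
        guard status == kCVReturnSuccess, let texture = cvTexture else { return nil }
        return CVMetalTextureGetTexture(texture)
    }
}
```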
invasivecode · 8 years ago
Natural Language Processing and Speech Recognition in iOS
Natural Language Processing (NLP) is a field of Artificial Intelligence (AI) and Computational Linguistics (CL) concerned with the interactions between computers and human natural languages. NLP is related to the area of Human-Computer Interaction (HCI) and the ability of a computer program to understand human speech as it is spoken.
Speech Recognition (SR) is a sub-field of computational linguistics that develops methodologies and technologies enabling the recognition and translation of spoken language into text by computers.
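On iOS 10, the Speech framework exposes this capability directly. Here is a minimal sketch, assuming the app has the required usage description in its Info.plist and a recorded audio file to transcribe.

```swift
import Speech

// A minimal sketch: transcribing an audio file with SFSpeechRecognizer.
// Requires user authorization and the NSSpeechRecognitionUsageDescription
// key in Info.plist.
func transcribe(fileURL: URL) {
    SFSpeechRecognizer.requestAuthorization { status in
        guard status == .authorized,
              let recognizer = SFSpeechRecognizer(),
              recognizer.isAvailable else { return }
        let request = SFSpeechURLRecognitionRequest(url: fileURL)
        _ = recognizer.recognitionTask(with: request) { result, error in
            if let result = result, result.isFinal {
                print(result.bestTranscription.formattedString)
            }
        }
    }
}
```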
invasivecode · 9 years ago
Network Reachability in Swift 3
Almost every mobile app needs to connect to the Internet at some point, either to retrieve data from a host or service or to upload new data to it. However, an Internet connection is not always available, and its availability can change at any time. To learn the state of the current network, and whether a host or service is reachable through that network, we can use the SCNetworkReachability API.
This API belongs to the System Configuration framework and, like other Core Foundation-style APIs, it is written in C, which makes it less than straightforward to use from Swift.
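To see what the C API looks like from Swift, here is a minimal sketch of a one-shot reachability check; a production version would also register a callback to observe changes over time.

```swift
import SystemConfiguration

// A minimal sketch: a synchronous reachability check for a host name
// using the C-based SCNetworkReachability API from Swift.
func isReachable(host: String) -> Bool {
    guard let reachability = SCNetworkReachabilityCreateWithName(nil, host) else {
        return false
    }
    var flags = SCNetworkReachabilityFlags()
    guard SCNetworkReachabilityGetFlags(reachability, &flags) else { return false }
    // Reachable, and no further connection (e.g. VPN dial-in) required.
    return flags.contains(.reachable) && !flags.contains(.connectionRequired)
}
```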
invasivecode · 9 years ago
Interactive View Animations in iOS 10
At the last WWDC, Apple introduced a new API to create interactive animations in iOS 10. In this post, I want to show you how to use this new API to build a new kind of animation, where the user can pause and scrub the animation and interact with the animated object.
The main class of this new API is UIViewPropertyAnimator. This class allows you to animate views from start to finish, as you would with the old UIView animation API. In addition, UIViewPropertyAnimator allows you to interact with the animated object and to pause and restart the animations. The UIViewPropertyAnimator class adopts two protocols, UIViewAnimating and UIViewImplicitlyAnimating, which add further functionality to the class. Let's start using the UIViewPropertyAnimator class to build a very simple animation.
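Here is a minimal sketch of the idea; the view and the numbers are placeholders rather than the post's actual sample.

```swift
import UIKit

// A minimal sketch: a property animator that can either run on its own
// or be scrubbed interactively, e.g. from a pan gesture.
func makeSlideAnimator(for view: UIView) -> UIViewPropertyAnimator {
    return UIViewPropertyAnimator(duration: 1.0, curve: .easeInOut) {
        view.center.x += 200
    }
}

// Run it from start to finish:
//   let animator = makeSlideAnimator(for: someView)
//   animator.startAnimation()
//
// Or drive it interactively:
//   animator.pauseAnimation()
//   animator.fractionComplete = 0.5   // jump halfway through
```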
invasivecode · 9 years ago
Convolutional Neural Networks in iOS 10 and macOS
In iOS 10 and macOS 10.12, Apple introduces new Convolutional Neural Network APIs in the Metal Performance Shaders Framework and the Accelerate Framework.
In a previous post, I provided an introduction to Machine Learning (ML) and Artificial Neural Networks (ANN) for iOS. If you are not familiar with these topics, I suggest reading that post first.
I recently attended CVPR 2016, a scientific conference on Computer Vision and Pattern Recognition. There, I learnt that Convolutional Neural Networks are used in almost every recent research work done by universities and companies around the world. The popularity of Convolutional Neural Networks in different fields of computer vision, combined with the availability of fast and powerful GPUs on smartphones, has made these networks a very attractive tool for mobile development as well. Convolutional Neural Networks and Deep Learning open up a wide range of innovative mobile applications.
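As a first taste of the Metal Performance Shaders side, here is a hedged sketch of building a single convolution layer with the iOS 10-era MPSCNN API; the layer sizes are arbitrary, and the zero-filled weights and biases stand in for trained values.

```swift
import MetalPerformanceShaders

// A hedged sketch, assuming the iOS 10 / macOS 10.12 MPSCNN API:
// one 3x3 convolution layer with a ReLU neuron. Real networks load
// trained weights instead of the zero placeholders used here.
func makeConvolutionLayer(device: MTLDevice) -> MPSCNNConvolution {
    let desc = MPSCNNConvolutionDescriptor(
        kernelWidth: 3,
        kernelHeight: 3,
        inputFeatureChannels: 3,
        outputFeatureChannels: 16,
        neuronFilter: MPSCNNNeuronReLU(device: device, a: 0))

    // Placeholder parameters: 16 filters of size 3x3x3, plus 16 biases.
    let weights = [Float](repeating: 0, count: 16 * 3 * 3 * 3)
    let biases = [Float](repeating: 0, count: 16)

    return MPSCNNConvolution(device: device,
                             convolutionDescriptor: desc,
                             kernelWeights: weights,
                             biasTerms: biases,
                             flags: .none)
}
```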
invasivecode · 9 years ago
Machine Learning in Swift for iOS
WWDC16 just ended, and Apple left us with amazing new APIs. This year, speech recognition, proactive applications, deep learning, user intents, and neural networks were the most frequent terms used during the conference. So, besides a rich new version 3 of Swift, almost every new addition to iOS, tvOS, and macOS is related to artificial intelligence. For example, Metal and Accelerate in iOS 10 provide implementations of convolutional neural networks (CNNs) for the GPU and CPU, respectively. During the keynote, Craig Federighi (Apple's SVP of Software Engineering) showed how the Photos app on iOS organizes our photos according to different smart criteria, and he highlighted that Photos uses deep learning to provide that functionality. Federighi also showed how Siri, now open to developers, can suggest what we need.
invasivecode · 9 years ago
Investing in custom development
With the launch of SignaKit by iNVASIVECODE, we would like to explain why you should invest in custom-built code. SignaKit is a high-level iOS framework that helps you create handwritten signatures and add them to PDF documents. SignaKit is code that you pay for, and it comes with the support needed to make sure it keeps working through future Apple updates.
When developing an app, you have the choice of using third-party code and open-source libraries or creating custom-built code. Using open-source code might seem simple, efficient, and cost-effective, but there are implications you might want to consider: in the long run, it might actually cost you more in both money and time.
invasivecode · 9 years ago
Apple App Distribution
Companies often ask us for advice on the current options for developing an app for internal use. Sometimes they are confused about which Apple Developer Program to choose, how to distribute the app to their employees, or whether they have to publish the app on the App Store at all. They have heard of MDM, VPP, Custom B2B, the Enterprise Program, and other acronyms, and they want to put the pieces of the puzzle in the right place. I will try to do that in this post.
invasivecode · 9 years ago
Replicator Layer
In this tutorial, I am going to show you how to create a very interesting animation with replicator layers, instances of the CAReplicatorLayer class. After building so many apps and using every single framework of Cocoa Touch, every time I build something new for a customer I feel the need to spend some time researching new functionality and interaction patterns, to make the solution we provide truly unique. Most of the time, I focus on usability and try to push the limits of Cocoa Touch.
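To show the core mechanic before we get to the full animation, here is a minimal sketch; the dot layer and timing values are illustrative.

```swift
import UIKit

// A minimal sketch: CAReplicatorLayer clones one source layer, offsetting
// each copy in space (instanceTransform) and in time (instanceDelay).
func addReplicator(to view: UIView) {
    let replicator = CAReplicatorLayer()
    replicator.frame = view.bounds
    replicator.instanceCount = 10
    replicator.instanceTransform = CATransform3DMakeTranslation(20, 0, 0)
    replicator.instanceDelay = 0.1

    // The single layer that gets replicated.
    let dot = CALayer()
    dot.frame = CGRect(x: 0, y: 0, width: 10, height: 10)
    dot.backgroundColor = UIColor.red.cgColor
    dot.cornerRadius = 5
    replicator.addSublayer(dot)

    // A pulsing animation; each clone plays it 0.1 s after the previous one.
    let pulse = CABasicAnimation(keyPath: "opacity")
    pulse.fromValue = 1.0
    pulse.toValue = 0.2
    pulse.duration = 0.5
    pulse.autoreverses = true
    pulse.repeatCount = .infinity
    dot.add(pulse, forKey: "pulse")

    view.layer.addSublayer(replicator)
}
```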
invasivecode · 9 years ago
tvOS Focus Engine and Collection View
I recently did some consulting work for a new startup here in San Francisco, building a couple of quite complex UI animations for an Apple TV app. Doing so, I was able to work very closely with tvOS and its very interesting interaction mechanism: the Focus Engine. I am quite fascinated by this new interaction model. After spending almost eight years with touch-based devices, this new toy brings some fresh air to my work. I am going to demonstrate how the Focus Engine works using the watchOS UI example I built here some months ago. Please review that example if you haven't already, because I am going to use it as it is, without going into the details again. There is some math involved, but complex UI animations sometimes require some math.
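Before diving into that example, here is a minimal sketch of the pattern at the heart of most tvOS focus work; the cell class and scale factor are illustrative.

```swift
import UIKit

// A minimal sketch: responding to focus changes in a collection view
// cell. The Focus Engine calls didUpdateFocus(in:with:) whenever focus
// moves, and the coordinator keeps our animation in sync with the system's.
final class FocusableCell: UICollectionViewCell {
    override func didUpdateFocus(in context: UIFocusUpdateContext,
                                 with coordinator: UIFocusAnimationCoordinator) {
        super.didUpdateFocus(in: context, with: coordinator)
        coordinator.addCoordinatedAnimations({
            // Grow when gaining focus, shrink back when losing it.
            self.transform = self.isFocused
                ? CGAffineTransform(scaleX: 1.15, y: 1.15)
                : .identity
        }, completion: nil)
    }
}
```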
invasivecode · 9 years ago
Capture Video with AVFoundation and Swift
AVFoundation allows you to capture multimedia data generated by different input sources (camera, microphone, ...) and redirect them to any output destination (screen, speakers, render context, ...).
Some years ago, I wrote this post on how to build a custom video camera based on AVFoundation using Objective-C. At that time, Swift did not exist. Recently, we have received many requests to show how to build the same custom video camera using Swift, so here I am going to show you how to do that.
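Here is a minimal sketch of the capture pipeline's skeleton, assuming a recent Swift; frames arrive in the delegate's captureOutput callback, and the camera usage description must be set in Info.plist.

```swift
import AVFoundation

// A minimal sketch: a capture session wiring the default camera to a
// video data output whose frames arrive on a background queue.
func makeCaptureSession(
    delegate: AVCaptureVideoDataOutputSampleBufferDelegate
) -> AVCaptureSession? {
    let session = AVCaptureSession()
    session.sessionPreset = .high

    // Input: the default video device (usually the back camera).
    guard let camera = AVCaptureDevice.default(for: .video),
          let input = try? AVCaptureDeviceInput(device: camera),
          session.canAddInput(input) else { return nil }
    session.addInput(input)

    // Output: uncompressed frames delivered to the delegate.
    let output = AVCaptureVideoDataOutput()
    output.setSampleBufferDelegate(delegate,
                                   queue: DispatchQueue(label: "video.frames"))
    guard session.canAddOutput(output) else { return nil }
    session.addOutput(output)

    return session
}
```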