#AVKit
Explore tagged Tumblr posts
Link
Indian student Akshat Srivastava, a 22-year-old from BITS Pilani, Goa, has recently achieved significant recognition by winning the prestigious Swift Student Challenge. This accomplishment brought him the unique opportunity to meet Apple CEO Tim Cook, who praised Srivastava’s innovative work in app development. The young developer’s journey and his app, MindBud, are exemplary of the burgeoning talent and creativity among Indian students.

The Swift Student Challenge: A Prestigious Platform
The Swift Student Challenge is an annual competition hosted by Apple, aimed at encouraging young developers to showcase their coding skills and creativity. Winning this challenge is a testament to the participant’s technical prowess and innovative thinking. Akshat Srivastava’s success in this competition is a proud moment for India, highlighting the country’s growing pool of talented developers.

MindBud: Merging Classic Games with Modern Technology
The Inspiration Behind MindBud
Akshat Srivastava’s award-winning app, MindBud, was inspired by playful interactions with his nephew. Recognizing the importance of fostering both analytical and creative thinking in children, Srivastava developed MindBud to merge classic games with new technology. MindBud features four mini-games designed to be enjoyed by kids with their family and friends, promoting both fun and learning.

Technological Excellence
Srivastava utilized SwiftUI, AVKit, PencilKit, and FileManager to create an immersive and interactive experience with MindBud. These technologies allowed him to integrate modern features seamlessly into classic games, enhancing the user experience while maintaining the nostalgic charm of traditional games.

Tim Cook’s Admiration for Srivastava
During their meeting, Tim Cook expressed his admiration for Srivastava’s innovative spirit and dedication. Cook shared his thoughts on X (formerly Twitter), saying, “I met so many extraordinary developers when I visited India last year. It was equally wonderful to meet Akshat this week and see how he's created a whole new way to share his love of classic games with the next generation.”

Srivastava’s Journey: From Covid-19 Crisis to WWDC 2024
Early Beginnings
Srivastava’s journey in app development began during the Covid-19 pandemic. He developed an app to track vacant hospital beds via social media posts, showcasing his ability to leverage technology for social good. This early experience laid the foundation for his continued passion for app development.

Recognition and Future Prospects
Winning the Swift Student Challenge earned Srivastava a place among the 50 Distinguished Winners invited to WWDC 2024. This accolade not only recognizes his current achievements but also sets the stage for his future endeavors in technology.

WWDC 2024 and Distinguished Winners
A Unique Opportunity
The 50 Distinguished Winners of the Swift Student Challenge are invited to a three-day in-person experience at Apple Park. This exclusive event includes tailored programming and special events designed to inspire and educate the next generation of developers.
Impact on Indian Developers
Tim Cook highlighted the significant contributions of Indian developers during WWDC 2024, saying, “Akshat is part of a growing generation of developers from all across India who are bringing their best ideas to life through coding and making an important impact in their communities and around the world.” This acknowledgment underscores the increasing influence of Indian developers on the global tech landscape.

FAQs

What is the Swift Student Challenge?
The Swift Student Challenge is an annual competition hosted by Apple to encourage young developers to showcase their coding skills and creativity through innovative app projects.

Who is Akshat Srivastava?
Akshat Srivastava is a 22-year-old student from BITS Pilani, Goa, who won the Swift Student Challenge with his app MindBud.

What is MindBud?
MindBud is an app developed by Akshat Srivastava that blends classic games with modern technology to foster analytical and creative thinking in children.

How did Tim Cook react to Srivastava’s app?
Tim Cook expressed admiration for Srivastava’s innovation and dedication, praising the app for its creativity and impact on the next generation.

What opportunities do Swift Student Challenge winners get?
Winners of the Swift Student Challenge are invited to a three-day in-person experience at Apple Park during WWDC, featuring tailored programming and special events.
#AkshatSrivastava #AppDevelopment #AppleCEO #AVKit #BITSaniGoa #codingcompetition #FileManager #Indiandevelopers #IndianStudentAkshatSrivastava #innovativeapps #MindBudapp #PencilKit #SwiftStudentChallenge #SwiftUI #TimCook #WWDC2024
0 notes
Text
Hire iOS Developers to Build High-Quality Applications
In today’s competitive digital landscape, having a robust iOS application is crucial for businesses aiming to reach a vast audience of Apple users. Partnering with skilled iOS developers can transform your innovative ideas into high-performing applications that captivate users and drive business growth.
Why Hire Professional iOS Developers?
Professional iOS developers bring a wealth of expertise to your project, ensuring that your application is not only functional but also optimized for performance and user experience. They are proficient in various frameworks and technologies, including:
Frameworks:
UIKit
ARKit
CoreBluetooth
AVKit
AVFoundation
Databases:
SQLite
CoreData
Firestore
Realm
This technical proficiency ensures that your app is built using the latest industry standards, providing a seamless and engaging user experience.
Flexible Hiring Models to Suit Your Needs
Understanding that each project has unique requirements, we offer flexible hiring models to align with your specific needs:
Hourly: Ideal for short-term tasks or ongoing support, allowing you to pay only for the hours utilized.
Monthly: Suitable for mid-term projects, providing dedicated developer support for a fixed monthly fee.
Long-Term: Perfect for extensive projects requiring a committed partnership, ensuring consistent development efforts over an extended period.
These adaptable options empower you to manage your resources effectively while ensuring access to top-tier development talent.
Streamlined Hiring Process
Our efficient hiring process is designed to connect you with the right talent swiftly:
Inquiry: Share your project ideas and requirements with us, assured that all information is kept confidential.
Select CV: Review a curated list of potential iOS developers tailored to your project’s needs.
Interview: Conduct interviews to assess the candidates’ skills and compatibility with your project.
Terms & Contract: Define the terms of engagement and formalize the partnership through a contract.
Get Started: Begin your project with your chosen developer, setting the stage for successful collaboration.
This structured approach ensures a seamless onboarding experience, allowing your project to progress without unnecessary delays.
Embracing the Latest SEO Practices
With the latest search engine updates emphasizing high-quality, user-centric content, it’s essential to ensure that your application’s online presence is optimized. This includes focusing on clear, concise meta titles and descriptions, as well as delivering valuable content that meets user intent.
Conclusion
Investing in professional iOS development is a strategic move to enhance your business’s digital footprint. By hiring skilled developers, you ensure that your application is built to the highest standards, offering an exceptional user experience that can drive engagement and growth.
Partner with us to bring your iOS application vision to life, leveraging our expertise to achieve your business objectives.
0 notes
Text
Cursed alignment chart of my OCs
Groovy Hobgoblin: Locust, long furby
Wacky Hobgoblin: Reena, Mimicree
Repressed Hobgoblin: Vampire, Secreechie
Groovy Clown: Nervig, Hotel Mutt
Wacky Clown: Cass, Unknown Species
Repressed Clown: Juno, Mimicree
Groovy Cryptid: Incipere, Dragon
Wacky Cryptid: Kaiator, Dragon
Repressed Cryptid: Virgil, Avkit
3 notes
Text
SwiftUI Video Player Tutorial
The VideoPlayer is a view that embeds an AVPlayer, including all controls for playback. In this tutorial a video is loaded from a URL and played in a VideoPlayer view. This tutorial is built for iOS 14 with Xcode 12, which can be downloaded from the Apple developer portal.
Open Xcode and either click Create a new Xcode project in Xcode’s startup window, or choose File > New > Project. In the template selector, select iOS as the platform, select App template in the Application section and then click Next.
Enter SwiftUIVideoPlayerTutorial as the Product Name, select SwiftUI as Interface, SwiftUI App as Life Cycle and Swift as Language. Deselect the Include Tests checkbox and click Next. Choose a location to save the project on your Mac.
In the canvas, click Resume to display the preview. If the canvas isn’t visible, select Editor > Canvas to show it.
In the Project navigator, click to select ContentView.swift. Change the code inside the ContentView struct to
import SwiftUI
import AVKit

struct ContentView: View {
    private let player = AVPlayer(url: URL(string: "https://www.learningcontainer.com/wp-content/uploads/2020/05/sample-mp4-file.mp4")!)

    var body: some View {
        VideoPlayer(player: player)
            .onAppear() {
                player.play()
            }
    }
}
The player property creates a new player for the video resource referenced by the given URL. Inside the body view, the VideoPlayer is displayed. The onAppear modifier is used to auto-play the video using the play() method of AVPlayer.
Go to the Preview pane and select the Live Preview button; the video will start playing.
The source code of the SwiftUIVideoPlayerTutorial can be downloaded from the ioscreator repository on GitHub.
0 notes
Photo
THE FOLLOWING AVENGERS HAVE LEFT THE ACADEMY!
@avkit
0 notes
Link
via Twitter https://twitter.com/vajrakayat
0 notes
Quote
I would like to thank my arms for always being by my side, my legs for always supporting me, and finally my fingers, because I could always count on them.
Bumb Avkit
0 notes
Text
How to Hire iOS Developers in India
Are you ready to elevate your iOS app development? Whether you’re embarking on a groundbreaking app venture or seeking to enhance your existing iOS ecosystem, Qono Tech presents the perfect solution: hire iOS developers in India. Our skilled professionals are poised to be your dependable collaborators in achieving unprecedented success. Let’s delve into how hiring iOS developers can propel your projects to new heights.
Why Hire iOS Developers from India?
India has emerged as a global hub for software development, and for good reason. With a vast pool of talent, cost-effectiveness, and a strong work ethic, Indian iOS developers bring unparalleled value to your projects. By tapping into this talent pool, you gain access to exceptional expertise without breaking the bank.
Our Service Packages
At Qono Tech, we understand that every project is unique. That’s why we offer three flexible hiring packages tailored to your specific needs:
Hourly: Pay only for the hours you need and gain access to top-notch iOS developers for your specific tasks.
Monthly: Ideal for businesses with mid-term development needs, our monthly package offers fixed pricing for your peace of mind.
Long-Term: For large-scale projects or ongoing development tasks, our long-term hiring package ensures a committed and extended partnership.
How to Hire iOS Developers in 5 Easy Steps
Inquiry: Share your project ideas and requirements with us securely and confidentially.
Select CV: Review CVs of potential iOS developers and shortlist the best fit for your project.
Assessment: Conduct interviews to assess candidates’ skills and abilities.
Trial Run: Enjoy a complimentary 40-hour trial period to evaluate our developers’ expertise.
Add Resource to Your Team: Formalize the onboarding process and welcome your new iOS developer to your team.
Our iOS Developers Framework & Technology Expertise
Our developers boast expertise in a wide array of frameworks, databases, libraries, tools, and utilities, including:
Frameworks: UIKit, ARKit, CoreBluetooth, AVKit, AVFoundation
Database: SQLite, CoreData, Firestore, Realm
Libraries: Alamofire, Firebase, SwiftyJSON, MBProgressHUD, Kingfisher
Tools & Utilities: Swift, Objective-C, SwiftUI, Xcode, TestFlight, XCTest, Detox, EarlGrey, Appium, JUnit/NUnit/xUnit
Other: Chat, Charts, Map, Localization, Audio/Video Call
Looking For Dedicated iOS Developers?
Qono Tech boasts a highly qualified and expert team of designers and developers ready to bring your project to life. Start planning and executing your project today with our dedicated iOS developers in India.
Ready to revolutionize your iOS app development journey? Hire iOS developers from Qono Tech today and embark on a path to unparalleled success!
0 notes
Link
via Twitter https://twitter.com/vajrakayat
0 notes
Text
How to Play, Record, and Merge Videos in iOS and Swift
Update note: This tutorial has been updated to iOS 11 and Swift 4 by Owen Brown. The original tutorial was written by Abdul Azeem with fixes and clarifications made by Joseph Neuman.
Learn how to play, record, and merge videos on iOS!
Recording videos, and playing around with them programmatically, is one of the coolest things you can do with your phone, but not nearly enough apps make use of it. Doing this requires the AV Foundation framework, which has been part of macOS since OS X Lion (10.7) and of iOS since iOS 4 in 2010.
AV Foundation has grown considerably since then, with well over 100 classes now. This tutorial covers media playback and some light editing to get you started with AV Foundation. In particular, you’ll learn how to:
Select and play a video from the media library.
Record and save a video to the media library.
Merge multiple videos together into a combined video, complete with a custom soundtrack!
I don’t recommend running the code in this tutorial on the simulator, because you’ll have no way to capture video. Plus, you’ll need to figure out a way to add videos to the media library manually. In other words, you really need to test this code on a device! To do that you’ll need to be a registered Apple developer. A free account will work just fine for this tutorial.
Ready? Lights, cameras, action!
Getting Started
Start by downloading the materials for this tutorial (you can find a link at the top or bottom of this tutorial). This project contains a storyboard and several view controllers with the UI for a simple video playback and recording app.
The main screen contains the three buttons below that segue to other view controllers:
Select and Play Video
Record and Save Video
Merge Video
Build and run the project, and test out the buttons; only the three buttons on the initial scene do anything, but you will change that soon!
Select and Play Video
The “Select and Play Video” button on the main screen segues to PlayVideoController. In this section of the tutorial, you’ll add the code to select a video file and play it.
Start by opening PlayVideoViewController.swift, and add the following import statements at the top of the file:
import AVKit
import MobileCoreServices
Importing AVKit gives you access to the AVPlayer object that plays the selected video. MobileCoreServices contains predefined constants such as kUTTypeMovie, which you’ll need when selecting videos.
Next, scroll down to the end of the file and add the following class extensions. Make sure you add these to the very bottom of the file, outside the curly braces of the class declaration:
// MARK: - UIImagePickerControllerDelegate
extension PlayVideoViewController: UIImagePickerControllerDelegate {
}

// MARK: - UINavigationControllerDelegate
extension PlayVideoViewController: UINavigationControllerDelegate {
}
These extensions set up the PlayVideoViewController to adopt the UIImagePickerControllerDelegate and UINavigationControllerDelegate protocols. You’ll be using the system-provided UIImagePickerController to allow the user to browse videos in the photo library, and that class communicates back to your app through these delegate protocols. Although the class is named “image picker”, rest assured it works with videos too!
Next, head back to PlayVideoViewController’s main class definition and add a call to a helper method from VideoHelper to open the image picker. Later, you’ll add helper tools of your own in VideoHelper. Add the following code to playVideo(_:):
VideoHelper.startMediaBrowser(delegate: self, sourceType: .savedPhotosAlbum)
In the code above, you ensure that tapping Play Video will open the UIImagePickerController, allowing the user to select a video file from the media library.
To see what’s under the hood of this method, open VideoHelper.swift. It does the following:
It checks whether the .savedPhotosAlbum source is available on the device. Other sources are the camera itself and the photo library. This check is essential whenever you use a UIImagePickerController to pick media. If you don’t do it, you might try to pick media from a non-existent media library, resulting in crashes or other unexpected issues.
If the source you want is available, it creates a UIImagePickerController object and sets its source and media type.
Finally, it presents the UIImagePickerController modally.
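Put together, the helper is presumably something close to the following sketch (an approximation for illustration; the exact implementation ships with the starter project):

import UIKit
import MobileCoreServices

struct VideoHelper {
  static func startMediaBrowser(
    delegate: UIViewController & UIImagePickerControllerDelegate & UINavigationControllerDelegate,
    sourceType: UIImagePickerControllerSourceType
  ) {
    // 1. Bail out if the requested source isn't available on this device.
    guard UIImagePickerController.isSourceTypeAvailable(sourceType) else { return }

    // 2. Configure a picker that only shows movies from the chosen source.
    let mediaUI = UIImagePickerController()
    mediaUI.sourceType = sourceType
    mediaUI.mediaTypes = [kUTTypeMovie as String]
    mediaUI.allowsEditing = true
    mediaUI.delegate = delegate

    // 3. Present the picker modally from the calling view controller.
    delegate.present(mediaUI, animated: true, completion: nil)
  }
}

Calling it with sourceType: .savedPhotosAlbum, as playVideo(_:) does above, then shows the user’s saved videos in a standard picker.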
Now you’re ready to give your project another whirl! Build and run. Tap Select and Play Video on the first screen, and then tap Play Video on the second screen; you should see your videos presented similar to the following screenshot.
Once you see the list of videos, select one. You’ll be taken to another screen that shows the video in detail, along with buttons to cancel, play and choose. If you tap the play button the video will play. However, if you tap the choose button, the app just returns to the Play Video screen! This is because you haven’t implemented any delegate methods to handle choosing a video from the picker.
Back in Xcode, scroll down to the UIImagePickerControllerDelegate class extension in PlayVideoViewController.swift and add the following delegate method implementation:
func imagePickerController(_ picker: UIImagePickerController,
                           didFinishPickingMediaWithInfo info: [String : Any]) {
  // 1
  guard
    let mediaType = info[UIImagePickerControllerMediaType] as? String,
    mediaType == (kUTTypeMovie as String),
    let url = info[UIImagePickerControllerMediaURL] as? URL
    else { return }

  // 2
  dismiss(animated: true) {
    // 3
    let player = AVPlayer(url: url)
    let vcPlayer = AVPlayerViewController()
    vcPlayer.player = player
    self.present(vcPlayer, animated: true, completion: nil)
  }
}
Here’s what you’re doing in this method:
You get the media type and URL of the selected media, and ensure it’s of type movie.
You dismiss the image picker.
In the completion block, you create an AVPlayerViewController to play the media.
Build and run. Tap Select and Play Video, then Play Video, and choose a video from the list. You should be able to see the video playing in the media player.
Record and Save Video
Now that you have video playback working, it’s time to record a video using the device’s camera and save it to the media library.
Open RecordVideoViewController.swift, and add the following import:
import MobileCoreServices
You’ll also need to adopt the same protocols as PlayVideoViewController, by adding the following to the end of the file:
extension RecordVideoViewController: UIImagePickerControllerDelegate {
}

extension RecordVideoViewController: UINavigationControllerDelegate {
}
Add the following code to record(_:):
VideoHelper.startMediaBrowser(delegate: self, sourceType: .camera)
It uses the same helper method as in PlayVideoViewController, but it accesses the .camera source instead so you can record video.
Build and run to see what you’ve got so far.
Go to the Record screen and tap Record Video. Instead of the Photo Gallery, the camera UI opens. When the alerts ask for camera and microphone permissions, tap OK. Start recording a video by tapping the red record button at the bottom of the screen, and tap it again when you’re done recording.
Now you can opt to use the recorded video or do a retake. Tap Use Video. You’ll notice that it just dismisses the view controller. That’s because — you guessed it — you haven’t implemented an appropriate delegate method to save the recorded video to the media library.
Add the following method to the UIImagePickerControllerDelegate class extension at the bottom:
func imagePickerController(_ picker: UIImagePickerController,
                           didFinishPickingMediaWithInfo info: [String : Any]) {
  dismiss(animated: true, completion: nil)

  guard
    let mediaType = info[UIImagePickerControllerMediaType] as? String,
    mediaType == (kUTTypeMovie as String),
    let url = info[UIImagePickerControllerMediaURL] as? URL,
    UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(url.path)
    else { return }

  // Handle a movie capture
  UISaveVideoAtPathToSavedPhotosAlbum(
    url.path,
    self,
    #selector(video(_:didFinishSavingWithError:contextInfo:)),
    nil)
}
Don’t worry about the error on that last line of code; you’ll take care of it shortly.
As before, the delegate method gives you a URL pointing to the video. You verify that the app can save the file to the device’s photo album, and if so, save it.
UISaveVideoAtPathToSavedPhotosAlbum is the function provided by the SDK to save videos to the Photos Album. As parameters, you pass the path to the video you want to save as well as a target and action to call back, which will inform you of the status of the save operation.
Add the implementation of the callback to the main class definition next:
@objc func video(_ videoPath: String,
                 didFinishSavingWithError error: Error?,
                 contextInfo info: AnyObject) {
  let title = (error == nil) ? "Success" : "Error"
  let message = (error == nil) ? "Video was saved" : "Video failed to save"

  let alert = UIAlertController(title: title, message: message, preferredStyle: .alert)
  alert.addAction(UIAlertAction(title: "OK", style: UIAlertActionStyle.cancel, handler: nil))
  present(alert, animated: true, completion: nil)
}
The callback method simply displays an alert to the user, announcing whether the video file was saved or not, based on the error status.
Build and run. Record a video and select Use Video when you’re done recording. If you’re asked for permission to save to your video library, tap OK. If the “Video was saved” alert pops up, you just successfully saved your video to the photo library!
Now that you can play videos and record videos, it’s time to take the next step and try some light video editing.
Merging Videos
The final piece of functionality for the app is to do a little editing. Your user will select two videos and a song from the music library, and the app will combine the two videos and mix in the music.
The project already has a starter implementation in MergeVideoViewController.swift. The code here is similar to the code you wrote to play a video. The big difference is when merging, the user needs to select two videos. That part is already set up, so the user can make two selections that will be stored in firstAsset and secondAsset.
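As a point of reference, the merge screen’s stored properties presumably look something like the sketch below (the property names match the ones used later in this tutorial; the rest of the controller — outlets for the buttons, the selection actions, and so on — is omitted):

import UIKit
import AVFoundation
import MediaPlayer
import Photos

class MergeVideoViewController: UIViewController {
  // The two videos picked by the user, in the order they were selected.
  var firstAsset: AVAsset?
  var secondAsset: AVAsset?

  // The optional soundtrack picked with MPMediaPickerController.
  var audioAsset: AVAsset?

  // Spinner shown while the export session is running.
  @IBOutlet weak var activityMonitor: UIActivityIndicatorView!
}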
The next step is to add the functionality to select the audio file.
The UIImagePickerController only provides functionality to select video and images from the media library. To select audio files from your music library, you will use the MPMediaPickerController. It works essentially the same as UIImagePickerController, but instead of images and video, it accesses audio files in the media library.
Open MergeVideoViewController.swift and add the following code to loadAudio(_:):
let mediaPickerController = MPMediaPickerController(mediaTypes: .any)
mediaPickerController.delegate = self
mediaPickerController.prompt = "Select Audio"
present(mediaPickerController, animated: true, completion: nil)
The above code creates a new MPMediaPickerController instance and displays it as a modal view controller.
Build and run. Now tap Merge Video, then Load Audio to access the audio library on your device. Of course, you’ll need some audio files on your device. Otherwise, the list will be empty. The songs will also have to be physically present on the device, so make sure you’re not trying to load a song from the cloud.
If you select a song from the list, you’ll notice that nothing happens. That’s right! MPMediaPickerController needs delegate methods! Find the MPMediaPickerControllerDelegate class extension at the bottom of the file and add the following two methods to it:
func mediaPicker(_ mediaPicker: MPMediaPickerController,
                 didPickMediaItems mediaItemCollection: MPMediaItemCollection) {
  dismiss(animated: true) {
    let selectedSongs = mediaItemCollection.items
    guard let song = selectedSongs.first else { return }

    let url = song.value(forProperty: MPMediaItemPropertyAssetURL) as? URL
    self.audioAsset = (url == nil) ? nil : AVAsset(url: url!)
    let title = (url == nil) ? "Asset Not Available" : "Asset Loaded"
    let message = (url == nil) ? "Audio Not Loaded" : "Audio Loaded"

    let alert = UIAlertController(title: title, message: message, preferredStyle: .alert)
    alert.addAction(UIAlertAction(title: "OK", style: .cancel, handler: nil))
    self.present(alert, animated: true, completion: nil)
  }
}

func mediaPickerDidCancel(_ mediaPicker: MPMediaPickerController) {
  dismiss(animated: true, completion: nil)
}
The code is very similar to the delegate methods for UIImagePickerController. You set the audio asset based on the media item selected via the MPMediaPickerController after ensuring it’s a valid media item. Note that it’s important to only present new view controllers after dismissing the current one, which is why you wrapped the code above inside the completion handler.
Build and run. Go to the Merge Videos screen. Select an audio file and if there are no errors, you should see the “Audio Loaded” message.
You now have all your assets loading correctly. It’s time to merge the various media files into one file. But before you get into that code, you must do a little bit of set up.
Export and Merge
The code to merge your assets will require a completion handler to export the final video to the photos album. Add the code below to MergeVideoViewController:
func exportDidFinish(_ session: AVAssetExportSession) {
  // Cleanup assets
  activityMonitor.stopAnimating()
  firstAsset = nil
  secondAsset = nil
  audioAsset = nil

  guard
    session.status == AVAssetExportSessionStatus.completed,
    let outputURL = session.outputURL
    else { return }

  let saveVideoToPhotos = {
    PHPhotoLibrary.shared().performChanges({
      PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: outputURL)
    }) { saved, error in
      let success = saved && (error == nil)
      let title = success ? "Success" : "Error"
      let message = success ? "Video saved" : "Failed to save video"

      let alert = UIAlertController(title: title, message: message, preferredStyle: .alert)
      alert.addAction(UIAlertAction(title: "OK", style: UIAlertActionStyle.cancel, handler: nil))
      self.present(alert, animated: true, completion: nil)
    }
  }

  // Ensure permission to access Photo Library
  if PHPhotoLibrary.authorizationStatus() != .authorized {
    PHPhotoLibrary.requestAuthorization { status in
      if status == .authorized {
        saveVideoToPhotos()
      }
    }
  } else {
    saveVideoToPhotos()
  }
}
Once the export completes successfully, the above code saves the newly exported video to the photo album. You could just display the output video in an AssetBrowser, but it’s easier to copy the output video to the photo album so you can see the final output.
Now, add the following code to merge(_:):
guard
  let firstAsset = firstAsset,
  let secondAsset = secondAsset
  else { return }

activityMonitor.startAnimating()

// 1 - Create AVMutableComposition object. This object will hold your AVMutableCompositionTrack instances.
let mixComposition = AVMutableComposition()

// 2 - Create two video tracks
guard let firstTrack = mixComposition.addMutableTrack(
  withMediaType: AVMediaType.video,
  preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
  else { return }
do {
  try firstTrack.insertTimeRange(
    CMTimeRangeMake(kCMTimeZero, firstAsset.duration),
    of: firstAsset.tracks(withMediaType: AVMediaType.video)[0],
    at: kCMTimeZero)
} catch {
  print("Failed to load first track")
  return
}

guard let secondTrack = mixComposition.addMutableTrack(
  withMediaType: AVMediaType.video,
  preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
  else { return }
do {
  try secondTrack.insertTimeRange(
    CMTimeRangeMake(kCMTimeZero, secondAsset.duration),
    of: secondAsset.tracks(withMediaType: AVMediaType.video)[0],
    at: firstAsset.duration)
} catch {
  print("Failed to load second track")
  return
}

// 3 - Audio track
if let loadedAudioAsset = audioAsset {
  let audioTrack = mixComposition.addMutableTrack(
    withMediaType: AVMediaType.audio,
    preferredTrackID: 0)
  do {
    try audioTrack?.insertTimeRange(
      CMTimeRangeMake(kCMTimeZero, CMTimeAdd(firstAsset.duration, secondAsset.duration)),
      of: loadedAudioAsset.tracks(withMediaType: AVMediaType.audio)[0],
      at: kCMTimeZero)
  } catch {
    print("Failed to load Audio track")
  }
}

// 4 - Get path
guard let documentDirectory = FileManager.default.urls(
  for: .documentDirectory,
  in: .userDomainMask).first
  else { return }
let dateFormatter = DateFormatter()
dateFormatter.dateStyle = .long
dateFormatter.timeStyle = .short
let date = dateFormatter.string(from: Date())
let url = documentDirectory.appendingPathComponent("mergeVideo-\(date).mov")

// 5 - Create Exporter
guard let exporter = AVAssetExportSession(
  asset: mixComposition,
  presetName: AVAssetExportPresetHighestQuality)
  else { return }
exporter.outputURL = url
exporter.outputFileType = AVFileType.mov
exporter.shouldOptimizeForNetworkUse = true

// 6 - Perform the Export
exporter.exportAsynchronously() {
  DispatchQueue.main.async {
    self.exportDidFinish(exporter)
  }
}
Here’s a step-by-step breakdown of the above code:
You create an AVMutableComposition object to hold your video and audio tracks and transform effects.
Next, you create an AVMutableCompositionTrack for the video and add it to your AVMutableComposition object. Then you insert your two videos to the newly created AVMutableCompositionTrack.
Note that insertTimeRange(_:of:at:) allows you to insert a part of a video into your main composition instead of the whole video. This way, you can trim the video to a time range of your choosing.
In this instance, you want to insert the whole video, so you create a time range from kCMTimeZero to your video asset duration. The at parameter allows you to place your video/audio track wherever you want it in your composition. Notice how the code inserts firstAsset at time zero, and it inserts secondAsset at the end of the first video. This tutorial assumes you want your video assets one after the other. But you can also overlap the assets by playing with the time ranges.
For working with time ranges, you use CMTime structs. CMTime structs are non-opaque mutable structs representing times, where the time could be a timestamp or a duration.
Similarly, you create a new track for your audio and add it to the main composition. This time you set the audio time range to the sum of the duration of the first and second videos, since that will be the complete length of your video.
Before you can save the final video, you need a path for the saved file. So create a unique file name (based upon the current date) that points to a file in the documents folder.
Finally, render and export the merged video. To do this, you create an AVAssetExportSession object that transcodes the contents of an AVAsset source object to create an output of the form described by a specified export preset.
After you’ve initialized an export session with the asset that contains the source media, the export preset name (presetName), and the output file type (outputFileType), you start the export running by invoking exportAsynchronously(). Because the code performs the export asynchronously, this method returns immediately. The code calls the completion handler you supply to exportAsynchronously() whether the export fails, completes, or the user canceled. Upon completion, the exporter’s status property indicates whether the export has completed successfully. If it has failed, the value of the exporter’s error property supplies additional information about the reason for the failure.
An AVComposition instance combines media data from multiple file-based sources. At its top level, an AVComposition is a collection of tracks, each presenting media of a specific type such as audio or video. An instance of AVCompositionTrack represents a single track.
Similarly, AVMutableComposition and AVMutableCompositionTrack also present a higher-level interface for constructing compositions. These objects offer insertion, removal, and scaling operations that you’ve seen already and will come up again.
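Because the merge code leans so heavily on CMTime values, here is a tiny standalone illustration of the calls it uses (the durations are made-up numbers; the API spellings are the Swift 4 ones used throughout this tutorial):

import CoreMedia

// Two made-up durations: 5 seconds and 8 seconds on a 600-per-second timescale.
let firstDuration = CMTimeMake(3000, 600)
let secondDuration = CMTimeMake(4800, 600)

// The audio track in the tutorial spans both videos, so its duration is the sum.
let totalDuration = CMTimeAdd(firstDuration, secondDuration)

// A range starting at zero and covering the whole composition.
let fullRange = CMTimeRangeMake(kCMTimeZero, totalDuration)

print(CMTimeGetSeconds(totalDuration))                  // 13.0
print(CMTimeGetSeconds(CMTimeRangeGetEnd(fullRange)))   // 13.0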
Go ahead, build and run your project!
Select two videos and an audio file, and merge the selected files. If the merge was successful, you should see a “Video Saved” message. At this point, your new video should be present in the photo album.
Go to the photo album, or browse using the Select and Play Video screen within the app. You might notice that although the app merged the videos, there are some orientation issues. Portrait video is in landscape mode, and sometimes videos are turned upside down.
This is due to the default AVAsset orientation. All movie and image files recorded using the default iPhone camera application have the video frame set to landscape, and so the iPhone saves the media in landscape mode.
Video Orientation
AVAsset has a preferredTransform property that contains the media orientation information, and it applies this to a media file whenever you view it using the Photos app or QuickTime. In the code above, you haven’t applied a transform to your AVAsset objects, hence the orientation issue.
You can correct this easily by applying the necessary transforms to your AVAsset objects. But as your two video files can have different orientations, you’ll need to use two separate AVMutableCompositionTrack instances instead of one as you originally did.
Before you can do this, add the following helper method to VideoHelper:
static func orientationFromTransform(_ transform: CGAffineTransform)
  -> (orientation: UIImageOrientation, isPortrait: Bool) {
  var assetOrientation = UIImageOrientation.up
  var isPortrait = false

  if transform.a == 0 && transform.b == 1.0 && transform.c == -1.0 && transform.d == 0 {
    assetOrientation = .right
    isPortrait = true
  } else if transform.a == 0 && transform.b == -1.0 && transform.c == 1.0 && transform.d == 0 {
    assetOrientation = .left
    isPortrait = true
  } else if transform.a == 1.0 && transform.b == 0 && transform.c == 0 && transform.d == 1.0 {
    assetOrientation = .up
  } else if transform.a == -1.0 && transform.b == 0 && transform.c == 0 && transform.d == -1.0 {
    assetOrientation = .down
  }
  return (assetOrientation, isPortrait)
}
This code analyzes an affine transform to determine the input video’s orientation.
Next, add one more helper method to the class:
static func videoCompositionInstruction(_ track: AVCompositionTrack, asset: AVAsset)
  -> AVMutableVideoCompositionLayerInstruction {
  let instruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track)
  let assetTrack = asset.tracks(withMediaType: .video)[0]

  let transform = assetTrack.preferredTransform
  let assetInfo = orientationFromTransform(transform)

  var scaleToFitRatio = UIScreen.main.bounds.width / assetTrack.naturalSize.width
  if assetInfo.isPortrait {
    scaleToFitRatio = UIScreen.main.bounds.width / assetTrack.naturalSize.height
    let scaleFactor = CGAffineTransform(scaleX: scaleToFitRatio, y: scaleToFitRatio)
    instruction.setTransform(assetTrack.preferredTransform.concatenating(scaleFactor), at: kCMTimeZero)
  } else {
    let scaleFactor = CGAffineTransform(scaleX: scaleToFitRatio, y: scaleToFitRatio)
    var concat = assetTrack.preferredTransform
      .concatenating(scaleFactor)
      .concatenating(CGAffineTransform(translationX: 0, y: UIScreen.main.bounds.width / 2))
    if assetInfo.orientation == .down {
      let fixUpsideDown = CGAffineTransform(rotationAngle: CGFloat(Double.pi))
      let windowBounds = UIScreen.main.bounds
      let yFix = assetTrack.naturalSize.height + windowBounds.height
      let centerFix = CGAffineTransform(translationX: assetTrack.naturalSize.width, y: yFix)
      concat = fixUpsideDown.concatenating(centerFix).concatenating(scaleFactor)
    }
    instruction.setTransform(concat, at: kCMTimeZero)
  }

  return instruction
}
This method takes a track and asset, and returns an AVMutableVideoCompositionLayerInstruction which wraps the affine transform needed to get the video right side up. Here’s what’s going on, step-by-step:
You create an AVMutableVideoCompositionLayerInstruction and associate it with your firstTrack.
Next, you create an AVAssetTrack object from your AVAsset. An AVAssetTrack object provides the track-level inspection interface for all assets. You need this object in order to access the preferredTransform and dimensions of the asset.
Then, you save the preferred transform and the amount of scale required to fit the video to the current screen. You’ll use these values in the following steps.
If the video is in portrait, you need to recalculate the scale factor, since the default calculation is for videos in landscape. Then all you need to do is apply the orientation rotation and scale transforms.
If the video is in landscape, a similar set of steps applies the scale and transform. There’s one extra check, since the video could have been produced in either landscape left or landscape right. Because there are “two landscapes”, the aspect ratio will match, but it’s possible the video will be rotated 180 degrees. The extra check for a video orientation of .down handles this case.
With the helper methods set up, find merge(_:) and insert the following between sections #2 and #3:
// 2.1
let mainInstruction = AVMutableVideoCompositionInstruction()
mainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero,
                                            CMTimeAdd(firstAsset.duration, secondAsset.duration))

// 2.2
let firstInstruction = VideoHelper.videoCompositionInstruction(firstTrack, asset: firstAsset)
firstInstruction.setOpacity(0.0, at: firstAsset.duration)
let secondInstruction = VideoHelper.videoCompositionInstruction(secondTrack, asset: secondAsset)

// 2.3
mainInstruction.layerInstructions = [firstInstruction, secondInstruction]
let mainComposition = AVMutableVideoComposition()
mainComposition.instructions = [mainInstruction]
mainComposition.frameDuration = CMTimeMake(1, 30)
mainComposition.renderSize = CGSize(width: UIScreen.main.bounds.width,
                                    height: UIScreen.main.bounds.height)
First, you set up two separate AVMutableCompositionTrack instances. That means you need to apply an AVMutableVideoCompositionLayerInstruction to each track in order to fix the orientation separately.
2.1: First, you set up mainInstruction to wrap the entire set of instructions. Note that the total time here is the sum of the first asset’s duration and the second asset’s duration.
2.2: Next, you set up the two instructions — one for each asset — using the helper method you defined earlier. The instruction for the first video needs one extra addition: you set its opacity to 0 at the end so it becomes invisible when the second video starts.
2.3: Now that you have your AVMutableVideoCompositionLayerInstruction instances for the first and second tracks, you simply add them to the main AVMutableVideoCompositionInstruction object. Next, you add your mainInstruction object to the instructions property of an instance of AVMutableVideoComposition. You also set the frame rate for the composition to 30 frames/second.
Now that you’ve got an AVMutableVideoComposition object configured, all you need to do is assign it to your exporter. Insert the following code at the end of section #5 (just before exportAsynchronously()):
exporter.videoComposition = mainComposition
Whew – that’s it!
Build and run your project. If you create a new video by combining two videos (and optionally an audio file), you will see that the orientation issues disappear when you play back the new merged video.
Where to Go From Here?
You can download the final project using the link at the top or bottom of this tutorial.
If you followed along, you should now have a good understanding of how to play video, record video, and merge multiple videos and audio in your apps.
AV Foundation gives you a lot of flexibility when playing around with videos. You can also apply any kind of CGAffineTransform to merge, scale, or position videos.
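As a quick illustration of that flexibility (the track, render size, and scale values below are placeholders, not part of this tutorial’s project), a layer instruction could just as easily shrink a video and pin it to a corner, picture-in-picture style:

import AVFoundation
import UIKit

// Builds a layer instruction that scales a track down to 25% and moves it
// toward the bottom-right corner of an assumed 1280x720 render area.
func pictureInPictureInstruction(for track: AVCompositionTrack) -> AVMutableVideoCompositionLayerInstruction {
  let instruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track)
  let scale = CGAffineTransform(scaleX: 0.25, y: 0.25)
  let move = CGAffineTransform(translationX: 1280 * 0.7, y: 720 * 0.7)
  instruction.setTransform(scale.concatenating(move), at: kCMTimeZero)
  return instruction
}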
If you haven’t already done so, I would recommend that you have a look at the WWDC videos on AV Foundation, such as WWDC 2016 session 503, Advances in AVFoundation Playback. Also, be sure to check out the Apple AV Foundation Framework documentation.
I hope this tutorial has been useful to get you started with video manipulation in iOS. If you have any questions, comments, or suggestions for improvement, please join the forum discussion below!
The post How to Play, Record, and Merge Videos in iOS and Swift appeared first on Ray Wenderlich.
How to Play, Record, and Merge Videos in iOS and Swift published first on https://medium.com/@koresol
0 notes