NeatoCode Techniques
Lance Nanek is a software engineer writing awesome mobile and wearable apps!
neatocode · 3 years ago
Text
Scheduling Work in Swift using `NSTimer`
NSTimer, exposed in Swift 3 and later as `Timer`, can be used to perform scheduled work in Swift. It is sometimes simpler than other options like GCD (Grand Central Dispatch) or NSOperationQueue, particularly if the work being scheduled needs to run on the main UI thread anyway to access UIView instances. More options, such as actors, are also coming in the future.
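For comparison, here is what similar scheduling looks like with GCD. This is a minimal sketch, and the queue label is an arbitrary choice of mine:

import Dispatch

// One-shot delayed work on the main queue.
DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) {
    print("GCD block ran once!")
}

// A repeating GCD timer takes a bit more setup than Timer.
let queue = DispatchQueue(label: "timer.queue")
let gcdTimer = DispatchSource.makeTimerSource(queue: queue)
gcdTimer.schedule(deadline: .now() + 1.0, repeating: 1.0)
gcdTimer.setEventHandler {
    print("GCD timer fired!")
}
gcdTimer.resume()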
Here is an example of using NSTimer running a block of code five times, one second apart:
import Foundation
import PlaygroundSupport
import UIKit

let maxRepeats = 5
var currentRepeats = 0

let blockTimerWith5Runs = Timer.scheduledTimer(withTimeInterval: 1.0, repeats: true) { timer in
    currentRepeats += 1
    print("Block ran \(currentRepeats) time(s)!")
    if currentRepeats >= maxRepeats {
        timer.invalidate()
    }
}
In an iOS app, RunLoop.main will already be running. In Xcode playgrounds you also need:
RunLoop.main.run(until: Date(timeIntervalSinceNow: 6))
Output:
Block ran 1 time(s)!
Block ran 2 time(s)!
Block ran 3 time(s)!
Block ran 4 time(s)!
Block ran 5 time(s)!
Timer can also be used to run a function via the target/selector API. Note that the target class must inherit from NSObject for the selector to be dispatched:
class FunctionTimerExample: NSObject {

    override init() {
        super.init()
        Timer.scheduledTimer(timeInterval: 1.0, target: self, selector: #selector(self.peep), userInfo: nil, repeats: false)
    }

    @objc func peep() {
        print("Function ran once!")
    }
}

let functionTimerExample = FunctionTimerExample()
RunLoop.main.run(until: Date(timeIntervalSinceNow: 2))
Output:
Function ran once!
The function can optionally receive the Timer itself as an argument with some user info attached:
class Counter {
    var count: Int
    init(_ count: Int) {
        self.count = count
    }
}

class FunctionTimerWithArgExample: NSObject {

    override init() {
        super.init()
        let userInfo = Counter(0)
        Timer.scheduledTimer(timeInterval: 1.0, target: self, selector: #selector(self.peepArg), userInfo: userInfo, repeats: true)
    }

    @objc func peepArg(timer: Timer) {
        guard let userInfo = timer.userInfo as? Counter else { return }
        userInfo.count += 1
        print("Function with arg ran \(userInfo.count) time(s)!")
        if userInfo.count >= maxRepeats {
            timer.invalidate()
        }
    }
}

let functionTimerWithArgExample = FunctionTimerWithArgExample()
RunLoop.main.run(until: Date(timeIntervalSinceNow: 6))
Output:
Function with arg ran 1 time(s)!
Function with arg ran 2 time(s)!
Function with arg ran 3 time(s)!
Function with arg ran 4 time(s)!
Function with arg ran 5 time(s)!
An example use case is debouncing a search input:
class DebouncedSearchViewController: UIViewController {

    var textField = UITextField(frame: CGRect(x: 20, y: 20, width: 200, height: 24))
    var timer: Timer?

    override func viewDidLoad() {
        super.viewDidLoad()
        view.addSubview(textField)
        textField.placeholder = "Enter search"
        textField.backgroundColor = .green
        self.textField.addTarget(
            self,
            action: #selector(self.textFieldDidChange(textField:)),
            for: .editingChanged)
    }

    @objc func textFieldDidChange(textField: UITextField) {
        print("Text changed: " + textField.text!)
        timer?.invalidate()
        timer = Timer.scheduledTimer(withTimeInterval: 2.0, repeats: false, block: { _ in
            guard let text = textField.text else { return }
            print("Submit debounced search query for: \(text)")
        })
    }
}

let vc = DebouncedSearchViewController()
vc.view.frame = CGRect(x: 0, y: 0, width: 300, height: 300)
PlaygroundPage.current.needsIndefiniteExecution = true
PlaygroundPage.current.liveView = vc.view
Demo of typing `ABC` quickly followed by `Z` after waiting for a second:
[Image]
At one company I worked at, we found that a 500 ms debounce instead of 200 ms reduced the number of network calls, and thus operational costs, without impacting user engagement. So there are debounce values that save cost, network bandwidth, and battery life without hurting the user experience.
Warning: if the scheduled work does something blocking like networking or file IO, make sure to start the timer off the main thread, or create a separate run loop and add the timer to it manually. Everything above runs on the main UI thread, so blocking operations would lock up the user interface.
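For example, here is a minimal sketch of running a timer on its own thread and run loop; the three second sleep just stands in for slow file IO:

let backgroundThread = Thread {
    let timer = Timer(timeInterval: 1.0, repeats: false) { _ in
        // Safe to block here without freezing the UI.
        Thread.sleep(forTimeInterval: 3.0) // simulated slow file IO
        print("Blocking work finished off the main thread!")
    }
    // Timers only fire on a run loop, so attach it to this thread's loop.
    RunLoop.current.add(timer, forMode: .default)
    RunLoop.current.run()
}
backgroundThread.start()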
neatocode · 10 years ago
Text
Grayscale and Indexed Color PNG Images on Android
I recently had to improve the quality of heads-up navigation images on an embedded Android system, and found an interesting solution using uncommon PNG color modes on Android.
The mapping SDK we used provided an image like this: 300-500 pixels across, full color, with a byte each for red, green, blue, and alpha when decompressed into bitmap form:
[Image]
Example image, 16KB
The previous implementation scaled the image down to 30x30 using the Android APIs before sending it to the embedded system and scaling it back up to 100x100 for display:
[Image]
Example image, 3KB
This looked horrible, but the small file size was needed so the image could be sent quickly, parsed quickly, and displayed quickly with minimum processing time and battery use. A heads up navigation image that shows up 3 seconds late isn't nearly as useful as one that shows up in half a second.
When suitable, indexed color and grayscale can halve the byte size of your image, and halve it again, without impacting quality. These modes are difficult to generate natively on Android, however, because the Bitmap and PNG compression utilities have no constants or arguments for them. Fortunately someone wrote up a really good example using the PNGJ library, and it works fine on Android.
Here's the indexed color version, 100x100, 2KB:
[Image]
And for these signs grayscale is fine, so here is the grayscale version I wrote, 100x100, 1KB:
[Image]
This allows full resolution instead of blocky scaling, but still keeps the size down for quick, low-battery-use display. I extracted the code from the Android app and put it in a sample Eclipse Java project here: https://github.com/lnanek/ExampleGenerateLowColorPngImage
Relevant code:
private static int extractLuminance(final int r, final int g, final int b) {
    // Standard Rec. 601 luma weights.
    return (int) (0.299 * r + 0.587 * g + 0.114 * b);
}

public static void toGrayscale(final String inputFilename,
        final String outputFilename, final boolean preserveMetaData) {

    // Read input
    final PngReader inputPngReader = new PngReader(new File(inputFilename));
    System.out.println("Read input: " + inputPngReader.toString());

    // Confirm compatible
    final int inputChannels = inputPngReader.imgInfo.channels;
    if (inputChannels < 3 || inputPngReader.imgInfo.bitDepth != 8) {
        throw new RuntimeException("This method is for RGB8/RGBA8 images");
    }

    // Setup output: same size, 8 bits, no alpha, grayscale, not indexed
    final ImageInfo outputImageSettings = new ImageInfo(
            inputPngReader.imgInfo.cols, inputPngReader.imgInfo.rows,
            8, false, true, false);
    final PngWriter outputPngWriter = new PngWriter(
            new File(outputFilename), outputImageSettings, true);
    final ImageLineInt outputImageLine = new ImageLineInt(outputImageSettings);

    // Copy meta data if desired
    if (preserveMetaData) {
        outputPngWriter.copyChunksFrom(inputPngReader.getChunksList(),
                ChunkCopyBehaviour.COPY_ALL_SAFE);
    }

    // For each row of input
    for (int rowIndex = 0; rowIndex < inputPngReader.imgInfo.rows; rowIndex++) {
        final IImageLine inputImageLine = inputPngReader.readRow();
        final int[] scanline = ((ImageLineInt) inputImageLine)
                .getScanline(); // to save typing

        // For each column, collapse the RGB sample to a single luminance value
        for (int columnIndex = 0; columnIndex < inputPngReader.imgInfo.cols; columnIndex++) {
            outputImageLine.getScanline()[columnIndex] = extractLuminance(
                    scanline[columnIndex * inputChannels],
                    scanline[columnIndex * inputChannels + 1],
                    scanline[columnIndex * inputChannels + 2]);
        }
        outputPngWriter.writeRow(outputImageLine, rowIndex);
    }

    inputPngReader.end(); // end the reader first, in case there are trailing chunks to read
    outputPngWriter.end();
}
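A call, with hypothetical file names, looks like:

// Convert a full color PNG into an 8-bit grayscale PNG, keeping metadata.
toGrayscale("sign_color.png", "sign_gray.png", true);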
So: take each row of the input image, calculate each pixel's brightness from its red, green, and blue values, then write that out to a new image. In the future, if I can adapt the indexed color version to average the colors and get them down to only 16 instead of 256, it may be possible to get an even better result!
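The simplest version of that reduction would be uniform quantization - snapping each 8-bit value to the midpoint of one of 16 evenly sized buckets. This helper is a sketch of my own, not code from the sample project:

// Map an 8-bit value (0-255) onto one of 16 levels, returning the
// midpoint of the chosen bucket to spread the rounding error evenly.
private static int quantizeTo16Levels(final int value) {
    final int bucket = value / 16; // 0..15
    return bucket * 16 + 8;        // bucket midpoint
}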
neatocode · 10 years ago
Video
[YouTube video]
The SimSurgeon app trains you in anatomy using live hand and head tracking thanks to the Intel RealSense camera! This is a demo of the app and its development, produced for the Intel RealSense App Challenge. Thanks for watching! Good luck to everyone else participating!
neatocode · 10 years ago
Photo
Amusing comments about new smart headwear :)
[Image]
Wow. And people said @GoogleGlass looked dorky. Yes, #Sony! I’d like fins on the side of my head. One #SonyAttach!, please! #DontUnderstandSomeTechies
neatocode · 11 years ago
Text
Ad Injection in Chrome by Extension - Finding the Culprit
Sometimes you are hacking away at work and you see an ad pop up on a web site where you know it just shouldn't be. Here is an example of an ad appearing on the right of a corporate web site that should be ad free:
[Image]
And expanded:
[Image]
When something like this happens your computer may have malware installed - software that performs actions other than those the user requested. In my case I was using Chrome on OS X and my McAfee virus scan hadn't found anything, so I knew the likely source was a Chrome extension.
Extensions add functionality to the browser, but sometimes they do things you don't want, like injecting ads into web pages that shouldn't have them to profit the extension author. Unfortunately, I had dozens of extensions installed, and Google searches showed this particular malware appeared in many different ones.
Removing my least used extensions didn't prevent the ads, but I finally found a find command that could hunt down the culprit. On OS X you can open the Terminal app and run these commands:

cd './Library/Application Support/Google/Chrome/Default/Extensions'
find . -type f -exec fgrep superfish {} \; -print
This changes the working directory to where Chrome extensions are stored on Mac OS X, then searches every file for the string "superfish" and prints out the matching line and file. Here is an example:
[Image]
This tells me the extension with the ad has ID ppmibgfeefcglejjlpeihfdimbkfbbnm. Now I can go into my Chrome settings and see the culprit in my case was "Change HTTP Request Header" and disable it:
[Image]
This saved me from having to remove all my extensions and add them back one at a time, day by day, watching for mysterious ads. Google searches showed that very many extensions suddenly started showing these ad frames using this superfish script, so you can't always tell by extension name. There are probably other malware scripts as well, so this find command isn't foolproof, but it might help someone else searching for a solution!
neatocode · 11 years ago
Video
[YouTube video]
Live Google Glass video of a bicycle ride through rural Missouri going home from work. There are some beautiful skies and trees and water! GoPro eat your heart out.
I used my wife's new model 2GB Glass, which is supposed to be more battery efficient as well - it made it through the whole trip recording with 37% battery left. I had tied it to my head with a couple of old hair bungees since I was worried it would slip off.
Here is a picture from before I put it on my head!
And here is what it looks like wearing it with the helmet at the end of the ride!
Thanks for watching! So what do you think? Can Google Glass replace GoPro?
neatocode · 11 years ago
Video
Rocking! Someone write this!
[YouTube video]
Fun “Pulse Fighter” music video ⊟
Whoa, this is really great! I wish Kumazon (Amazon.com for Bears?) was real. The music is all from Japanese chiptune artist Toriena, and the visuals are courtesy of m7kenji. Toriena just put out an album with Madmilky Records called A.I. Complex — you can preview it here. A digital download version of the album is coming soon.
neatocode · 11 years ago
Text
2GB Glass vs. Original Comparison Pics and Stats
My wife recently received the new 2GB Glass as a warranty replacement and I had the opportunity to take some pictures. This version was just announced by Google on Google+ as a way to improve speed and reliability. The most obvious physical difference is that the new version has a new type of nose pad mounted on a swivel. Here it is in comparison to my older Google Glass:
[Image]
And here it is alone:
[Image]
This should definitely be an improvement because the old nose pads were always falling off and disappearing. My current Glass actually only has one. The new Google Glass also has an FCC mark on the bottom:
[Image]
When I originally signed up for Google Glass at Google IO we all had to basically sign on to a human research experiment, so FCC approval is a big step up. Lastly, there is much more memory available. Here is /proc/meminfo for the new unit:

MemTotal: 1475828 kB
MemFree: 664108 kB
Buffers: 8164 kB
Cached: 280776 kB
SwapCached: 0 kB
Active: 321460 kB
Inactive: 241100 kB
Active(anon): 273972 kB
Inactive(anon): 2708 kB
Active(file): 47488 kB
Inactive(file): 238392 kB
Unevictable: 0 kB
Mlocked: 0 kB
HighTotal: 996352 kB
HighFree: 248880 kB
LowTotal: 479476 kB
LowFree: 415228 kB
SwapTotal: 131068 kB
SwapFree: 131068 kB
Dirty: 0 kB
Writeback: 0 kB
AnonPages: 273660 kB
Mapped: 326416 kB
Shmem: 3080 kB
Slab: 29612 kB
SReclaimable: 13120 kB
SUnreclaim: 16492 kB
KernelStack: 7016 kB
PageTables: 9992 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 868980 kB
Committed_AS: 12293056 kB
VmallocTotal: 507904 kB
VmallocUsed: 193464 kB
VmallocChunk: 181244 kB
And here they are for the older one:

MemTotal: 596116 kB
MemFree: 36368 kB
Buffers: 9168 kB
Cached: 140428 kB
SwapCached: 15832 kB
Active: 187736 kB
Inactive: 200780 kB
Active(anon): 117384 kB
Inactive(anon): 123356 kB
Active(file): 70352 kB
Inactive(file): 77424 kB
Unevictable: 1008 kB
Mlocked: 0 kB
HighTotal: 106496 kB
HighFree: 1416 kB
LowTotal: 489620 kB
LowFree: 34952 kB
SwapTotal: 131068 kB
SwapFree: 111276 kB
Dirty: 28 kB
Writeback: 0 kB
AnonPages: 234412 kB
Mapped: 228084 kB
Shmem: 772 kB
Slab: 21896 kB
SReclaimable: 8124 kB
SUnreclaim: 13772 kB
KernelStack: 6312 kB
PageTables: 8404 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 429124 kB
Committed_AS: 10665796 kB
VmallocTotal: 507904 kB
VmallocUsed: 186124 kB
VmallocChunk: 157700 kB
New /proc/cpuinfo:

Processor : ARMv7 Processor rev 3 (v7l)
processor : 0
BogoMIPS : 1194.54
processor : 1
BogoMIPS : 1199.54
Features : swp half thumb fastmult vfp edsp thumbee neon vfpv3 tls
CPU implementer : 0x41
CPU architecture: 7
CPU variant : 0x1
CPU part : 0xc09
CPU revision : 3
Hardware : OMAP4430
Revision : 0005
Serial : 0168376606012020
Older /proc/cpuinfo:

Processor : ARMv7 Processor rev 3 (v7l)
processor : 0
BogoMIPS : 597.27
processor : 1
BogoMIPS : 599.77
Features : swp half thumb fastmult vfp edsp thumbee neon vfpv3 tls
CPU implementer : 0x41
CPU architecture: 7
CPU variant : 0x1
CPU part : 0xc09
CPU revision : 3
Hardware : OMAP4430
Revision : 0003
Serial : 015d984107018012
I know many people I've talked to, particularly AR and game developers, were hoping Google would move to a modern processor. The TI OMAP shipped on smartphones many years ago. Unfortunately, we didn't see that with this revision.
For Android developers, here are the system properties of the new unit:

# begin build properties
# autogenerated by buildinfo.sh
ro.build.id=XRV70D
ro.build.display.id=XRV70D
ro.build.version.incremental=1218353
ro.build.version.sdk=19
ro.build.version.codename=REL
ro.build.version.release=4.4.2
ro.build.date=Mon Jun 9 22:36:33 UTC 2014
ro.build.date.utc=1402353393
ro.build.type=user
ro.build.user=android-build
ro.build.host=kpfj1.cbf.corp.google.com
ro.build.tags=release-keys
ro.product.model=Glass 1
ro.product.brand=Google
ro.product.name=glass_1
ro.product.device=glass-1
ro.product.board=glass_1
ro.product.cpu.abi=armeabi-v7a
ro.product.cpu.abi2=armeabi
ro.product.manufacturer=Google
ro.product.locale.language=en
ro.product.locale.region=US
ro.wifi.channels=
ro.board.platform=omap4
# ro.build.product is obsolete; use ro.product.device
ro.build.product=glass-1
# Do not try to parse ro.build.description or .fingerprint
ro.build.description=glass_1-user 4.4.2 XRV70D 1218353 release-keys
ro.build.fingerprint=Google/glass_1/glass-1:4.4.2/XRV70D/1218353:user/release-keys
ro.build.characteristics=default
# end build properties
#
# from device/glass/glass-1/system.prop
#
wifi.interface=wlan0
com.ti.omap_enhancement=true
ro.bq.gpu_to_cpu_unsupported=1
#
# ADDITIONAL_BUILD_PROPERTIES
#
drm.service.enabled=false
glass.gestureservice.start=1
persist.sys.usb.config=ptp
ro.com.android.dateformat=MM-dd-yyyy
ro.build.version.glass=XE18.1
ro.build.version.minor.glass=RC05
ro.error.receiver.system.apps=com.google.glass.logging
wifi.interface=wlan0
wifi.supplicant_scan_interval=60
bluetooth.enable_timeout_ms=10000
hwui.text_gamma=4
persist.sys.forced_orientation=0
ro.hwui.disable_scissor_opt=true
ro.hwui.drop_shadow_cache_size=2
ro.hwui.gradient_cache_size=0.5
ro.hwui.layer_cache_size=5
ro.hwui.patch_cache_size=64
ro.hwui.path_cache_size=3
ro.hwui.r_buffer_cache_size=3
ro.hwui.text_large_cache_height=512
ro.hwui.text_large_cache_width=2048
ro.hwui.text_small_cache_height=256
ro.hwui.text_small_cache_width=1024
ro.hwui.texture_cache_flushrate=0.4
ro.hwui.texture_cache_size=16
ro.opengles.version=131072
ro.sf.lcd_density=240
dalvik.vm.heapgrowthlimit=72m
dalvik.vm.heapmaxfree=2m
dalvik.vm.heapminfree=512k
dalvik.vm.heapsize=192m
dalvik.vm.heapstartsize=5m
dalvik.vm.heaptargetutilization=0.75
dalvik.vm.jit.codecachesize=0
persist.sys.dalvik.vm.lib=libdvm.so
dalvik.vm.dexopt-flags=m=y
net.bt.name=Android
dalvik.vm.stack-trace-file=/data/anr/traces.txt
And of the older one:

# begin build properties
# autogenerated by buildinfo.sh
ro.build.id=XRV72
ro.build.display.id=XRV72
ro.build.version.incremental=1223935
ro.build.version.sdk=19
ro.build.version.codename=REL
ro.build.version.release=4.4.2
ro.build.date=Thu Jun 12 03:02:32 UTC 2014
ro.build.date.utc=1402542152
ro.build.type=user
ro.build.user=android-build
ro.build.host=wped21.hot.corp.google.com
ro.build.tags=release-keys
ro.product.model=Glass 1
ro.product.brand=Google
ro.product.name=glass_1
ro.product.device=glass-1
ro.product.board=glass_1
ro.product.cpu.abi=armeabi-v7a
ro.product.cpu.abi2=armeabi
ro.product.manufacturer=Google
ro.product.locale.language=en
ro.product.locale.region=US
ro.wifi.channels=
ro.board.platform=omap4
# ro.build.product is obsolete; use ro.product.device
ro.build.product=glass-1
# Do not try to parse ro.build.description or .fingerprint
ro.build.description=glass_1-user 4.4.2 XRV72 1223935 release-keys
ro.build.fingerprint=Google/glass_1/glass-1:4.4.2/XRV72/1223935:user/release-keys
ro.build.characteristics=default
# end build properties
#
# from device/glass/glass-1/system.prop
#
wifi.interface=wlan0
com.ti.omap_enhancement=true
ro.bq.gpu_to_cpu_unsupported=1
#
# ADDITIONAL_BUILD_PROPERTIES
#
drm.service.enabled=false
glass.gestureservice.start=1
persist.sys.usb.config=ptp
ro.com.android.dateformat=MM-dd-yyyy
ro.build.version.glass=XE18.11
ro.build.version.minor.glass=RC06
ro.error.receiver.system.apps=com.google.glass.logging
wifi.interface=wlan0
wifi.supplicant_scan_interval=60
bluetooth.enable_timeout_ms=10000
hwui.text_gamma=4
persist.sys.forced_orientation=0
ro.hwui.disable_scissor_opt=true
ro.hwui.drop_shadow_cache_size=2
ro.hwui.gradient_cache_size=0.5
ro.hwui.layer_cache_size=5
ro.hwui.patch_cache_size=64
ro.hwui.path_cache_size=3
ro.hwui.r_buffer_cache_size=3
ro.hwui.text_large_cache_height=512
ro.hwui.text_large_cache_width=2048
ro.hwui.text_small_cache_height=256
ro.hwui.text_small_cache_width=1024
ro.hwui.texture_cache_flushrate=0.4
ro.hwui.texture_cache_size=16
ro.opengles.version=131072
ro.sf.lcd_density=240
dalvik.vm.heapgrowthlimit=72m
dalvik.vm.heapmaxfree=2m
dalvik.vm.heapminfree=512k
dalvik.vm.heapsize=192m
dalvik.vm.heapstartsize=5m
dalvik.vm.heaptargetutilization=0.75
dalvik.vm.jit.codecachesize=0
persist.sys.dalvik.vm.lib=libdvm.so
dalvik.vm.dexopt-flags=m=y
net.bt.name=Android
dalvik.vm.stack-trace-file=/data/anr/traces.txt
Thanks for reading! Hope this helps anyone considering buying an upgrade.
Hopefully this version goes better!
neatocode · 11 years ago
Text
Auto Chat Sticker: Foreground Extraction using the Dual Lens HTC One M8
HTC recently released a new version of the HTC One, called the M8, with dual lenses on the back that allow lots of interesting uses of the resulting depth data - for example, quick and easy foreground extraction.
You can start with the DualLensDemo included in their public API: https://www.htcdev.com/devcenter/opensense-sdk/htc-dual-lens-api
This example produces a depth map where the color changes based on the distance. Here is a screenshot of what it looks like:
[Image]
Here is the code to draw the depth visualization:
DualLens.Holder<byte[]> buf = mDualLens.new Holder<byte[]>();
DualLens.DataInfo datainfo = mDualLens.getStrengthMap(buf);
int[] depthData = new int[datainfo.width * datainfo.height];
int leftByte;
for (int i = 0; i < datainfo.width * datainfo.height; i++) {
    leftByte = buf.value[i] & 0x000000ff;
    depthData[i] = mColorBar[leftByte * 500];
}
Bitmap bmp = Bitmap.createBitmap(depthData, datainfo.width, datainfo.height, Config.ARGB_8888);
image.setImageBitmap(bmp);
image.setBackgroundColor(Color.WHITE);
You can keep a reference to the original image bitmap and then either pull colors from it or white out pixels based on the depth like this:
DualLens.Holder<byte[]> buf = mDualLens.new Holder<byte[]>();
DualLens.DataInfo datainfo = mDualLens.getStrengthMap(buf);
int[] outputImage = new int[datainfo.width * datainfo.height];
int pixelDepth;
for (int i = 0; i < datainfo.width * datainfo.height; i++) {
    pixelDepth = buf.value[i] & 0x000000ff;
    int depthX = i % datainfo.width;
    int depthY = i / datainfo.width;
    int originalX = originalImageBitmap.getWidth() * depthX / datainfo.width;
    int originalY = originalImageBitmap.getHeight() * depthY / datainfo.height;
    // White out background, pick original color from foreground.
    outputImage[i] = pixelDepth > 64 ? Color.WHITE :
            originalImageBitmap.getPixel(originalX, originalY);
}
Bitmap bmp = Bitmap.createBitmap(outputImage, datainfo.width, datainfo.height, Config.ARGB_8888);
image.setImageBitmap(bmp);
image.setBackgroundColor(Color.WHITE);
Source code on GitHub. Here is a screenshot with the modifications:
[Image]
Boom! Instant chat stickers, just like the ones zooming all over the place in hit communication apps like Line and Facebook Messenger. Foreground extraction is also very useful for making product images for user marketplace apps.
Traditionally it has been very labor intensive: in the graphics and retail industries, interns and other employees just starting out get tasked with carefully editing photos by hand, or with using much more complex imaging setups than we have on consumer phones.
So using the M8 Dual Lens API to get better results with almost no effort is really exciting! They also offer other ways to get at the data, like OpenGL geometries.
neatocode · 11 years ago
Link
Wow, really detailed, honest, and interesting look at what it is like to run an app startup off ad revenue:
Dear Locket users,
For those of you that signed up to earn cash, I’m regretful that we had to discontinue the pay-per-swipe service as of January 1st, 2014. We are now offering gift card options to redeem your remaining balance. The option is available inside the Locket app.
Below is an...
neatocode · 12 years ago
Note
Hello. How can I stream a live feed from a computer to google glass? Tks.
Hello, the most often used solution for this is to set up a Google Hangout between the Glass and the computer.
neatocode · 12 years ago
Note
Thanks for the great website, I had a quick question. I look for a method to stream google glass camera over wifi to my pc, and vice versa (stream video glass) are you aware of any methods of achieve this through a local wifi?
If you are a developer, some people have the WebRTC source code working on Glass:
https://code.google.com/p/webrtc/issues/detail?id=2083
https://code.google.com/p/webrtc/issues/detail?id=2561
That would enable WiFi video calling. I don't know of any good apps for that yet. It does usually take code changes to get a normal Android video app to work, even if you have root and a Bluetooth keyboard paired.
neatocode · 12 years ago
Note
Hello, we created a glass app with custom menu using mirror API. But when we click the menu it shows a synchronization icon over timeline item and the timeline becomes first position of the app but cannot get the menu's click event from notification servlet. Tumblr doesn't allow to post URLs so i can send the links. If you can advice us on this, would really appreciate.
Hello, this often happens when developing with your server on localhost, where the Google Mirror API servers can't contact your server. Try just deploying to a free appspot.com instance instead; it's easier than setting up the strange proxy solutions people use.
neatocode · 12 years ago
Text
Panning UI via Head Movements in Google Glass
The display of a wearable computer can be extended using the technique of measuring user head or eye movements and panning the display. This is an example of many techniques that will have to be developed in more depth to overcome the issues with limited display and input options on wearable devices. I show off the sample code below in action, and the built-in Google Glass photo sphere and browser panning support in this video:
Google Glass additionally uses head movements to scroll the list of voice commands after you use the "OK Glass" command:
[Images]
Another place you may have seen it used is in my Through The Walls AR hack, where you can look around and have the display scroll markers across the screen indicating where distant, out-of-sight things like traffic cameras are.
So what happens is the user sees one part of the image:
[Images]
Moves their head:
[Image]
Then sees another part:
[Images]
The code I used for the demo is available on GitHub as GyroImageView. I tried to make as simple an example as possible, so it just scrolls an image of the Golden Gate Bridge. Here is the code that sets up the scrolling image using the GestureImageView library:
setContentView(R.layout.view_activity);
image = (GestureImageView) findViewById(R.id.image);
moveAnimation = new MoveAnimation();
moveAnimation.setAnimationTimeMS(ANIMATION_DURATION_MS);
moveAnimation.setMoveAnimationListener(new MoveAnimationListener() {
    @Override
    public void onMove(final float x, final float y) {
        image.setPosition(x, y);
        image.redraw();
    }
});
Here is where the sensor-fusion-adjusted gyro measurements are used to scroll the view:
// On gyro motion, start an animated scroll in that direction.
@Override
public void onUpdate(float[] aGyro, float[] aGyroSum) {
    final float yGyro = aGyro[1];
    final float deltaX = GYRO_TO_X_PIXEL_DELTA_MULTIPLIER * yGyro;
    animateTo(deltaX);
}

// Animate to a given offset, stopping at the image edges.
private void animateTo(final float animationOffsetX) {
    float nextX = image.getImageX() + animationOffsetX;
    final int maxWidth = image.getScaledWidth();
    final int leftBoundary = (-maxWidth / 2) + image.getDisplayWidth();
    final int rightBoundary = (maxWidth / 2);
    if (nextX < leftBoundary) {
        nextX = leftBoundary;
    } else if (nextX > rightBoundary) {
        nextX = rightBoundary;
    }
    moveAnimation.reset();
    moveAnimation.setTargetX(nextX);
    moveAnimation.setTargetY(image.getImageY());
    image.animationStart(moveAnimation);
}
An animation is used to help smooth the scrolling of the UI. It spreads the movement out over time, and is reset if a new reading comes in. Without animation, the UI tended to jitter back and forth a lot. You can adjust the GYRO_TO_X_PIXEL_DELTA_MULTIPLIER and ANIMATION_DURATION_MS constants in the class to pan the UI more based on the detected motion, or to take more or less animation time to show the result.
Sensor Fusion is used to address the problems of orientation tracking using Android sensors. If the orientation is tracked using the accelerometer and magnetic compass, the signal is very noisy. If the orientation is tracked using the gyro, turning your head left and then turning your head right back to where you started may not register as the same location, a problem called gyro drift.
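A common fix for both problems is a complementary filter, which trusts the gyro over short time scales and the drift-free but noisy accelerometer/compass reading over long ones. Here is a rough Java sketch of the general idea - my own illustration with a made-up weight, not code from the GyroImageView project:

// Complementary filter sketch: fuse a gyro-integrated angle with a
// noisy but drift-free absolute angle (e.g. accelerometer + compass).
public class ComplementaryFilter {

    // Weight given to the gyro path; the remainder goes to the absolute reading.
    private static final float GYRO_WEIGHT = 0.98f;

    private float fusedAngle = 0f;

    // gyroRate: angular velocity in radians/second from the gyroscope.
    // absoluteAngle: noisy but drift-free angle in radians.
    // dtSeconds: time elapsed since the last sensor update.
    public float update(final float gyroRate, final float absoluteAngle,
            final float dtSeconds) {
        // Integrate the gyro for a smooth short-term estimate...
        final float gyroAngle = fusedAngle + gyroRate * dtSeconds;
        // ...then pull it slowly toward the drift-free reading so
        // gyro drift cannot accumulate.
        fusedAngle = GYRO_WEIGHT * gyroAngle + (1f - GYRO_WEIGHT) * absoluteAngle;
        return fusedAngle;
    }
}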
What are your opinions on this neat technique?
-Lance
Interested in working on cutting edge Android and Glass apps at startups and multi-billion dollar companies? New Frontier Nomads is hiring!
neatocode · 12 years ago
Note
Hi, I hope you are doing well. I am impressed by your work. The 'Vein Overlay' app looks amazing. I am also looking to start the Google Glass app development. Can you please specify me the tools to use for the development and what programming languages. Thank You.
Thanks! The vein overlays ( http://neatocode.tumblr.com/post/53113748050/super-human-vision-google-glass ) were done using Java and the Android Developer kit:
http://developer.android.com/sdk/index.html
Java is the language used to write Android apps. Google recently released some similar demos for writing Java Android Apps for Google Glass here:
https://developers.google.com/glass/gdk
neatocode · 12 years ago
Note
Hi, great glassware app for the autism hackathon. I'm curious how you are getting the display mirrored to your desktop for screencasting.
Thanks David, I wrote up how to mirror the display here:
http://neatocode.tumblr.com/post/49566072064/mirroring-google-glass
It basically uses the Android developer tools' ability to take a screenshot and does it as fast as it can for a somewhat decent video feed.
neatocode · 12 years ago
Text
Realtime Voice Analysis with Google Glass
Twilio's recent Autism Hackathon inspired me and several others to try to harness real-time sensor analysis, including voice, to help people with autism be more self-sufficient. We used Google's new wearable computer, Google Glass. Staff from the Autism Speaks organization, and another gentleman who works full time providing life training for people with Autism Spectrum Disorder, said our hack would be great for mock interview training! Here's how it works:
We built on top of an excellent open source sound analysis app called Audalyzer. Here's what the basic Audalyzer screen looks like running on Google Glass with a whistle at a certain tone in the background:
[Image]
The screens we added then help someone keep their voice calm and level during an interview by comparing the current loudest pitch to the average:
[Images]
We also used other sensors. For example, the orientation sensor is used to track gaze level. Here is looking at your feet:
[Image]
Here is a level gaze:
[Image]
A scoring system rewards users who get things right and ace their interview!
[Image]
The code is all open source on GitHub. Most of the new code is in the mymonitor package, with some ugly hooks into the main app due to time constraints at the hackathon. Code is broken into separate analyzers that can warn the user about behavior. For example, here's a simple analyzer for speaking too loud:
public class TooLoudAnalyzer implements Analyzer {

    private static final float TOO_LOUD = -20f;

    private static TooLoudAnalyzer instance = new TooLoudAnalyzer();

    private Float currentVolume;

    public static synchronized TooLoudAnalyzer getInstance() {
        return instance;
    }

    @Override
    public synchronized String getLabel() {
        final Boolean okLoudness = isConditionGood();
        return null == okLoudness ? "Measuring loudness..."
                : okLoudness ? ("Good Job Lance!\n(" + Math.round(currentVolume)
                        + "dB vs " + Math.round(TOO_LOUD) + "dB)")
                : ("Please keep your voice down.\n(" + Math.round(currentVolume)
                        + "dB vs " + Math.round(TOO_LOUD) + "dB)");
    }

    @Override
    public synchronized Boolean isConditionGood() {
        if (null == currentVolume) {
            return null;
        }
        return currentVolume < TOO_LOUD;
    }

    public synchronized void setPower(float power) {
        if (Float.isInfinite(power) || Float.isNaN(power)) {
            return;
        }
        currentVolume = power;
    }

    @Override
    public synchronized Integer getDrawable() {
        return R.drawable.loud_card;
    }
}
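Driving the analyzer might look like this hypothetical snippet - the -15f reading is a made-up value standing in for the app's live audio power measurement:

// Feed the analyzer a decibel power reading from the audio pipeline.
TooLoudAnalyzer analyzer = TooLoudAnalyzer.getInstance();
analyzer.setPower(-15f);
Boolean ok = analyzer.isConditionGood(); // false: -15 dB is louder than the -20 dB limit
String feedback = analyzer.getLabel();   // "Please keep your voice down..."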
Thanks for checking out our team's hack! This seems to be a really interesting area to work in! Rock Health recently funded a company called Spire, which helps lower stress and make people more productive by measuring breathing patterns and letting people train themselves to be calmer. Measuring voice has similar interesting propositions, not just for keeping your own voice calm, but also for detecting stress or pain levels in a conversation partner or patient's voice.
-Lance
Interested in working on cutting edge Android and Glass apps at startups and multi-billion dollar companies? New Frontier Nomads is hiring! Check out our job listing.