iPhone XS phones and the complexity of programming code
The new iPhone XS devices are the most complex and sophisticated phones Apple has ever produced. On pure technology, the A12 Bionic is the first commercially available 7-nanometer consumer chip, packing 6.9 billion transistors. Its eight-core Neural Engine can perform five trillion operations per second, up from 600 billion on the A11 Bionic. That kind of computing power in your pocket is something developers have always dreamed of, and that is before we even consider what it means for the AI and ML engines inside an app. Plenty of other specifications reinforce the point that this is a complex beast of a phone: the GPU, the Neural Engine, the Gigabit-class LTE modem, and the camera and screen resolutions.

For native app developers, this level of hardware is matched by newly exposed calls we can use in our code. But precisely because all of this is inherently complex, we have to ask whether this level of complexity and performance can also be harnessed from low-code environments.
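To make "newly exposed calls" concrete, here is a minimal, hypothetical Swift sketch using Apple's Core ML API (which the interview does not name); it asks the framework to schedule work across the CPU, GPU, and, on A12-class devices, the Neural Engine. The model file name is an assumption for illustration.

```swift
import Foundation
import CoreML

// A minimal sketch: ask Core ML to use every available compute unit,
// which on an A12 device includes the eight-core Neural Engine.
let config = MLModelConfiguration()
config.computeUnits = .all  // CPU, GPU, and Neural Engine

// "Classifier.mlmodelc" is a hypothetical compiled model bundled with the app.
if let url = Bundle.main.url(forResource: "Classifier", withExtension: "mlmodelc"),
   let model = try? MLModel(contentsOf: url, configuration: config) {
    // Predictions made through `model` now run wherever Core ML
    // decides is fastest, layer by layer.
    print("Loaded model: \(model.modelDescription)")
}
```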
Recently we spoke with Ryan Duguid, Chief Evangelist at Nintex, to understand what programming against a chip as complex as the A12 Bionic involves, and to find out whether low code has met its match.

ADM: The new A12 Bionic chip in the latest iPhone is extremely complex and intelligent. Do you think the complexity of the new chip has outgrown what simple low-code instructions can express?

Duguid: Not even close. At the end of the day, the goal of our entire industry is to push the limits of what is possible with technology: to constantly innovate and to exploit ever greater computing power, memory, and data-transfer rates. At the same time, Nintex's goal is to push the limits of what is possible without writing code. Why? Because we firmly believe that companies only truly become digital businesses by tackling every problem, large and small. That means putting tools in the hands of people who don't know how to write code, as well as developers looking for the most efficient way to solve a problem. So we focus our efforts on keeping pace with artificial intelligence, machine learning, blockchain, and other emerging technologies, and on working out what they look like when made available in a low-code or no-code environment. For example, we have already made it possible for our customers to use the sentiment analysis in Azure Cognitive Services to feed insight into processes that improve customer service, or the Google Vision API to help field staff identify which defective device needs to be fixed.

ADM: The reach of low-code programming is limited by the platform it is connected to. What challenges do low-code platforms face in connecting to AI and ML systems while those technologies are still so young?

Duguid: If you had asked me in 1995 I would have agreed with you, but I think you would be pleasantly surprised at how far low-code, or visual, programming platforms have come. The scale of these platforms is impressive, and it keeps improving as they integrate with an ever-growing range of SaaS platforms. What has accelerated this work is the broad acceptance of standards such as REST, JSON, and OpenAPI, and fortunately the major suppliers of AI and ML services comply with those standards. That said, even with the standards in place, one of the greatest challenges with these advanced services is the sophistication and dynamism of what they return.

For example, the Google Vision API offers a powerful set of image-analysis functions, including optical character recognition (OCR), handwriting recognition, logo detection, product search, face detection, landmark detection, and general image attributes. Depending on what you ask the API to tell you about an image, your results will vary considerably. A face search, for instance, can return a collection of faces; their X, Y coordinates in the image; the positions of important facial features such as eyes, eyebrows, and mouths; whether each person is happy or sad; whether they are wearing a hat... you name it. If you want to expose this kind of functionality with minimal or no code, you have to keep all of that in mind during design. A rough sketch of such a call follows below.
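As an illustration of the shape of that response, here is a minimal, hypothetical Swift sketch of a FACE_DETECTION request against the Vision API's REST endpoint; the API key and image data are placeholders, and error handling is elided.

```swift
import Foundation

// A minimal sketch of the face-detection call Duguid describes.
// YOUR_API_KEY is a placeholder; FACE_DETECTION returns bounding boxes,
// landmark positions, and likelihood fields such as joyLikelihood and
// headwearLikelihood ("whether they are wearing a hat").
func detectFaces(in imageData: Data) {
    let url = URL(string: "https://vision.googleapis.com/v1/images:annotate?key=YOUR_API_KEY")!
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")

    let body: [String: Any] = [
        "requests": [[
            "image": ["content": imageData.base64EncodedString()],
            "features": [["type": "FACE_DETECTION", "maxResults": 10]]
        ]]
    ]
    request.httpBody = try? JSONSerialization.data(withJSONObject: body)

    URLSession.shared.dataTask(with: request) { data, _, _ in
        guard let data = data,
              let json = (try? JSONSerialization.jsonObject(with: data)) as? [String: Any],
              let responses = json["responses"] as? [[String: Any]],
              let faces = responses.first?["faceAnnotations"] as? [[String: Any]] else { return }
        for face in faces {
            // Each annotation also carries boundingPoly vertices (X, Y
            // coordinates) and facial landmarks (eyes, eyebrows, mouth, ...).
            print(face["joyLikelihood"] ?? "UNKNOWN", face["headwearLikelihood"] ?? "UNKNOWN")
        }
    }.resume()
}
```

A low-code platform has to map a payload this rich and variable onto something a non-programmer can configure, which is exactly the design challenge Duguid describes.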
ADM: Microsoft recently purchased Lobe to make AI programming available to everyone. What use do the best mobile developers have here?

Duguid: Like all visual or low-code programming platforms, Lobe is designed to make certain aspects of software delivery faster, easier to manage, and in some cases less expensive. Specifically, Lobe is designed so that mere mortals can integrate advanced image- and audio-analysis capabilities into their apps. That is a win, because this type of functionality is usually reserved for high-end professional developers. At the same time, it is a common misconception that low-code and no-code platforms are meant only for non-developers, and that is simply not the case. At the end of the day, professional developers benefit from these tools too: the platform takes care of the activities that are already well understood, so developers can focus on the problems that are unique to each solution. Beyond accelerating delivery of the original solution, the final product is also far more maintainable, providing a level of agility that is much harder to achieve with custom code.