#Uber Eats API Data Sets
Decoding Uber and Uber Eats API Data Sets: Unleashing Their Potential and Applications
In the fast-paced world of technology and on-demand services, Uber and Uber Eats have become household names, revolutionizing the way we commute and dine. Behind the scenes, the treasure trove of data generated by these platforms holds immense potential for developers and businesses. In this blog post, we will delve into the intricacies of decoding Uber and Uber Eats API data sets, exploring their untapped potential and the diverse applications they can fuel.
Understanding Uber API Data:
The Uber API provides developers with access to a wealth of data, ranging from real-time ride information to historical trip data. By tapping into this API, developers can gain insights into user behavior, traffic patterns, and even explore possibilities for optimizing transportation services.
One key aspect of Uber API data is its geospatial nature. The platform's data sets include detailed information about pick-up and drop-off locations, allowing developers to analyze trends, identify popular routes, and optimize driver allocation. This information is invaluable for urban planning, traffic management, and the development of smart cities.
Moreover, the Uber API enables developers to access data on driver ratings, trip duration, and fare breakdowns. Businesses can leverage this information to enhance user experiences, tailor marketing strategies, and optimize pricing models.
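As a rough illustration, here is a minimal Python sketch of how a developer might start pulling ride product data from the Uber API. The endpoint path, query parameters, and response fields shown are assumptions based on Uber's public REST conventions, so check the current API reference before relying on them.
```python
import requests

# Assumed endpoint and fields -- verify against Uber's current API documentation.
UBER_API_BASE = "https://api.uber.com/v1.2"
ACCESS_TOKEN = "YOUR_OAUTH_ACCESS_TOKEN"  # hypothetical placeholder

def get_products(latitude, longitude):
    """Fetch the ride products available at a given pick-up location."""
    response = requests.get(
        f"{UBER_API_BASE}/products",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        params={"latitude": latitude, "longitude": longitude},
        timeout=10,
    )
    response.raise_for_status()
    return response.json().get("products", [])

if __name__ == "__main__":
    # Example coordinates (central Manchester) -- purely illustrative.
    for product in get_products(53.4808, -2.2426):
        print(product.get("display_name"), product.get("description"))
```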
Unleashing the Power of Uber Eats API Data:
Moving from rides to meals, Uber Eats API data opens up a whole new realm of possibilities. The API provides insights into customer preferences, restaurant performance, and delivery logistics. Developers can harness this data to create innovative solutions for the food industry.
For instance, by analyzing order histories and customer reviews, businesses can tailor their menus, optimize delivery times, and enhance overall customer satisfaction. This level of data-driven decision-making can give restaurants a competitive edge in a crowded market.
Additionally, the Uber Eats API allows developers to integrate delivery services into their own platforms. This opens up avenues for collaboration between food-related businesses and facilitates seamless order and delivery processes.
Applications of Decoded Uber and Uber Eats API Data:
Urban Planning and Traffic Management:
Utilize geospatial data to understand traffic patterns and optimize transportation routes.
Plan infrastructure developments based on popular pick-up and drop-off locations.
Personalized User Experiences:
Leverage user and driver ratings to enhance the matching algorithm for a more personalized service.
Tailor marketing efforts based on user preferences and behaviors.
Optimizing Restaurant Operations:
Analyze order histories to optimize inventory and menu offerings.
Use delivery data to improve logistics and reduce delivery times.
Collaborative Business Solutions:
Enable third-party platforms to integrate Uber Eats delivery services, expanding the reach of restaurants.
Foster collaborations between businesses based on shared data insights.
Conclusion:
Decoding Uber and Uber Eats API data sets unlocks a world of possibilities for developers, businesses, and city planners. The geospatial, user behavior, and transactional data offer valuable insights that can drive innovation across various industries. As technology continues to evolve, harnessing the power of API data will play a pivotal role in shaping the future of transportation, food delivery, and beyond. It's not just about decoding data; it's about unleashing its full potential for a smarter, more efficient, and user-centric world.
Monitor Competitor Pricing with Food Delivery Data Scraping
In the highly competitive food delivery industry, pricing can be the deciding factor between winning and losing a customer. With the rise of aggregators like DoorDash, Uber Eats, Zomato, Swiggy, and Grubhub, users can compare restaurant options, menus, and—most importantly—prices in just a few taps. To stay ahead, food delivery businesses must continually monitor how competitors are pricing similar items. And that’s where food delivery data scraping comes in.
Data scraping enables restaurants, cloud kitchens, and food delivery platforms to gather real-time competitor data, analyze market trends, and adjust strategies proactively. In this blog, we’ll explore how to use web scraping to monitor competitor pricing effectively, the benefits it offers, and how to do it legally and efficiently.
What Is Food Delivery Data Scraping?
Data scraping is the automated process of extracting information from websites. In the food delivery sector, this means using tools or scripts to collect data from food delivery platforms, restaurant listings, and menu pages.
What Can Be Scraped?
Menu items and categories
Product pricing
Delivery fees and taxes
Discounts and special offers
Restaurant ratings and reviews
Delivery times and availability
This data is invaluable for competitive benchmarking and dynamic pricing strategies.
Why Monitoring Competitor Pricing Matters
1. Stay Competitive in Real Time
Consumers often choose based on pricing. If your competitor offers a similar dish for less, you may lose the order. Monitoring competitor prices lets you react quickly to price changes and stay attractive to customers.
2. Optimize Your Menu Strategy
Scraped data helps identify:
Popular food items in your category
Price points that perform best
How competitors bundle or upsell meals
This allows for smarter decisions around menu engineering and profit margin optimization.
3. Understand Regional Pricing Trends
If you operate across multiple locations or cities, scraping competitor data gives insights into:
Area-specific pricing
Demand-based variation
Local promotions and discounts
This enables geo-targeted pricing strategies.
4. Identify Gaps in the Market
Maybe no competitor offers free delivery during weekdays or a combo meal under $10. Real-time data helps spot such gaps and create offers that attract value-driven users.
How Food Delivery Data Scraping Works
Step 1: Choose Your Target Platforms
Most scraping projects start with identifying where your competitors are listed. Common targets include:
Aggregators: Uber Eats, Zomato, DoorDash, Grubhub
Direct restaurant websites
POS platforms (where available)
Step 2: Define What You Want to Track
Set scraping goals. For pricing, track:
Base prices of dishes
Add-ons and customization costs
Time-sensitive deals
Delivery fees by location or vendor
Step 3: Use Web Scraping Tools or Custom Scripts
You can either:
Use scraping tools like Octoparse, ParseHub, Apify, or
Build custom scripts in Python using libraries like BeautifulSoup, Selenium, or Scrapy
These tools automate the extraction of relevant data and organize it in a structured format (CSV, Excel, or database).
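To make Step 3 concrete, here is a minimal sketch of a custom Python script using requests and BeautifulSoup. The URL and CSS selectors are hypothetical placeholders; a real scraper would use the markup of the specific menu page you are targeting and respect that platform's terms.
```python
import csv
import requests
from bs4 import BeautifulSoup

# Hypothetical listing page -- replace with a page you are permitted to scrape.
URL = "https://example-food-platform.com/restaurant/sample-menu"

def scrape_menu(url):
    html = requests.get(url, headers={"User-Agent": "price-monitor-bot"}, timeout=15).text
    soup = BeautifulSoup(html, "html.parser")
    items = []
    # The class names below are assumptions; inspect the real page markup first.
    for card in soup.select(".menu-item"):
        items.append({
            "name": card.select_one(".item-name").get_text(strip=True),
            "price": card.select_one(".item-price").get_text(strip=True),
        })
    return items

def save_to_csv(items, path="menu_prices.csv"):
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["name", "price"])
        writer.writeheader()
        writer.writerows(items)

if __name__ == "__main__":
    save_to_csv(scrape_menu(URL))
```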
Step 4: Automate Scheduling and Alerts
Set scraping intervals (daily, hourly, weekly) and create alerts for major pricing changes. This ensures your team is always equipped with the latest data.
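As an illustration of Step 4, the sketch below compares today's scraped prices with yesterday's snapshot and flags large movements. The file names, the assumption that prices are stored as plain numbers, and the alert threshold are all arbitrary choices; in practice the job would be triggered by a scheduler such as cron or a workflow tool.
```python
import csv

THRESHOLD = 0.10  # flag changes of more than 10% (arbitrary choice)

def load_prices(path):
    # Assumes a CSV with "name" and a numeric "price" column.
    with open(path, newline="", encoding="utf-8") as f:
        return {row["name"]: float(row["price"]) for row in csv.DictReader(f)}

def price_alerts(old_path="menu_prices_yesterday.csv", new_path="menu_prices.csv"):
    old, new = load_prices(old_path), load_prices(new_path)
    for name, new_price in new.items():
        old_price = old.get(name)
        if old_price and abs(new_price - old_price) / old_price > THRESHOLD:
            print(f"ALERT: {name} moved from {old_price:.2f} to {new_price:.2f}")

if __name__ == "__main__":
    price_alerts()
```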
Step 5: Analyze the Data
Feed the scraped data into BI tools like Power BI, Google Data Studio, or Tableau to identify patterns and inform strategic decisions.
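Before the data ever reaches a BI dashboard, a quick aggregation in Python can already surface patterns. This sketch assumes a CSV with numeric price and category columns; the file and column names are hypothetical.
```python
import pandas as pd

# Assumed columns: restaurant, category, name, price (numeric)
df = pd.read_csv("competitor_prices.csv")

# Average price per cuisine category, cheapest first.
summary = (
    df.groupby("category")["price"]
      .agg(["mean", "min", "max", "count"])
      .sort_values("mean")
)
print(summary.round(2))
```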
Tools and Technologies for Effective Scraping
Popular Tools:
Scrapy: Python-based framework perfect for complex projects
BeautifulSoup: Great for parsing HTML and small-scale tasks
Selenium: Ideal for scraping dynamic pages with JavaScript
Octoparse: No-code solution with scheduling and cloud support
Apify: Advanced, scalable platform with ready-to-use APIs
Hosting and Automation:
Use cron jobs or task schedulers for automation
Store data on cloud databases like AWS RDS, MongoDB Atlas, or Google BigQuery
Legal Considerations: Is It Ethical to Scrape Food Delivery Platforms?
This is a critical aspect of scraping.
Understand Platform Terms
Many websites explicitly state in their Terms of Service that scraping is not allowed. Scraping such platforms can violate those terms, even if it’s not technically illegal.
Avoid Harming Website Performance
Always scrape responsibly (a minimal Python sketch follows this list):
Use rate limiting to avoid overloading servers
Respect robots.txt files
Avoid scraping login-protected or personal user data
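Here is a minimal sketch of what the list above can look like in Python: check robots.txt before fetching and pause between requests. The URLs, user agent string, and delay are illustrative assumptions.
```python
import time
import requests
from urllib.robotparser import RobotFileParser

USER_AGENT = "price-monitor-bot"
DELAY_SECONDS = 5  # arbitrary polite delay between requests

robots = RobotFileParser()
robots.set_url("https://example-food-platform.com/robots.txt")
robots.read()

def polite_get(url):
    """Fetch a page only if robots.txt allows it, then wait before the next call."""
    if not robots.can_fetch(USER_AGENT, url):
        raise PermissionError(f"robots.txt disallows fetching {url}")
    response = requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=15)
    time.sleep(DELAY_SECONDS)  # simple rate limiting
    return response
```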
Use Publicly Available Data
Stick to scraping data that’s:
Publicly accessible
Not behind paywalls or logins
Not personally identifiable or sensitive
If possible, work with third-party data providers who have pre-approved partnerships or APIs.
Real-World Use Cases of Price Monitoring via Scraping
A. Cloud Kitchens
A cloud kitchen operating in three cities uses scraping to monitor average pricing for biryani and wraps. Based on competitor pricing, they adjust their bundle offers and introduce combo meals—boosting order value by 22%.
B. Local Restaurants
A family-owned restaurant tracks rival pricing and delivery fees during weekends. By offering a free dessert on orders above $25 (when competitors don’t), they see a 15% increase in weekend orders.
C. Food Delivery Startups
A new delivery aggregator monitors established players’ pricing to craft a price-beating strategy, helping them enter the market with aggressive discounts and gain traction.
Key Metrics to Track Through Price Scraping
When setting up your monitoring dashboard, focus on:
Average price per cuisine category
Price differences across cities or neighborhoods
Top 10 lowest/highest priced items in your segment
Frequency of discounts and offers
Delivery fee trends by time and distance
Most used upsell combinations (e.g., sides, drinks)
Challenges in Food Delivery Data Scraping (And Solutions)
Challenge 1: Dynamic Content and JavaScript-Heavy Pages
Solution: Drive a headless browser with tools like Selenium or Puppeteer so the page is fully rendered before you scrape its content.
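Below is a minimal headless Selenium sketch (Selenium 4 syntax) of that approach. The target URL and selectors are placeholders, and it assumes a compatible Chrome installation is available.
```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from bs4 import BeautifulSoup

options = Options()
options.add_argument("--headless=new")  # run Chrome without a visible window

driver = webdriver.Chrome(options=options)
try:
    driver.get("https://example-food-platform.com/restaurant/sample-menu")
    soup = BeautifulSoup(driver.page_source, "html.parser")
    # Selector is an assumption -- inspect the rendered markup first.
    for item in soup.select(".menu-item .item-name"):
        print(item.get_text(strip=True))
finally:
    driver.quit()
```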
Challenge 2: IP Blocking or Captchas
Solution: Rotate IPs with proxies, use CAPTCHA-solving tools, or throttle request rates.
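A simple way to spread requests across proxies and throttle the request rate with the requests library is sketched below. The proxy addresses are placeholders for proxies you are entitled to use, and CAPTCHA-solving services are deliberately left out.
```python
import itertools
import time
import requests

# Placeholder proxy pool -- substitute real proxy endpoints you are allowed to use.
PROXIES = itertools.cycle([
    "http://proxy1.example.com:8080",
    "http://proxy2.example.com:8080",
])

def fetch_with_rotation(url, delay=3):
    """Send each request through the next proxy in the pool, then pause."""
    proxy = next(PROXIES)
    response = requests.get(
        url,
        proxies={"http": proxy, "https": proxy},
        headers={"User-Agent": "price-monitor-bot"},
        timeout=15,
    )
    time.sleep(delay)  # throttle request rate
    return response
```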
Challenge 3: Frequent Site Layout Changes
Solution: Use resilient XPath and CSS selectors, and monitor scraper output regularly so layout changes are caught and fixed quickly.
Challenge 4: Keeping Data Fresh
Solution: Schedule automated scraping and build change detection algorithms to prioritize meaningful updates.
Final Thoughts
In today’s digital-first food delivery market, being reactive is no longer enough. Real-time competitor pricing insights are essential to survive and thrive. Data scraping gives you the tools to make informed, timely decisions about your pricing, promotions, and product offerings.
Whether you're a single-location restaurant, an expanding cloud kitchen, or a new delivery platform, food delivery data scraping can help you gain a critical competitive edge. But it must be done ethically, securely, and with the right technologies.
Restaurant Point of Sale Terminal Market Expansion Strategies and Growth Opportunities to 2033
Restaurant Point of Sale (POS) Terminal Market: Industry Trends and Forecast to 2032
The global Restaurant Point of Sale (POS) Terminal Market is undergoing a transformative evolution, driven by technological advancements, changing customer preferences, and the restaurant industry's growing emphasis on operational efficiency and enhanced customer experiences. As restaurants embrace digital ordering, contactless payments, loyalty program integration, and data-driven insights, POS terminals have become indispensable tools — far beyond simple payment processors.
The Restaurant POS Terminal Market, valued at approximately USD 17.2 billion in 2023, is projected to reach USD 38.4 billion by 2032, expanding at a compound annual growth rate (CAGR) of 9.3%. This impressive growth trajectory reflects both the market’s dynamic nature and the broader shift toward digitization across the food service industry.
Download a Free Sample Report: https://tinyurl.com/2k4hatuv
Understanding Restaurant POS Terminals
A Point-of-Sale (POS) terminal in a restaurant setting acts as the central hub for managing orders, processing payments, coordinating kitchen workflows, tracking inventory, and even monitoring sales performance in real time. Modern POS systems can be hardware-based (stationary or mobile) or fully software-based (cloud-hosted), and they increasingly integrate with third-party services, such as delivery apps, online reservations, and CRM systems.
Key Market Drivers
1. Demand for Enhanced Customer Experience
Consumer expectations in the restaurant industry have changed dramatically in recent years. Diners want:
quicker service,
multiple payment options,
personalized experiences,
loyalty rewards, and
seamless online-to-offline ordering.
POS systems now play a key role in enabling these experiences by integrating with kitchen display systems (KDS), payment gateways, reservation systems, and feedback tools.
2. Surge in Contactless Payments
The rise of contactless and mobile payments, accelerated by the COVID-19 pandemic, has significantly boosted POS terminal upgrades across restaurants worldwide. NFC-enabled terminals, QR code payment systems, and digital wallets (Apple Pay, Google Pay, Samsung Pay) are becoming table stakes in both quick-service and fine-dining environments.
3. Cloud-based POS Systems
The market has witnessed a strong shift from traditional on-premise systems toward cloud-based POS platforms. Cloud systems offer:
real-time data access,
centralized reporting for multi-location businesses,
easy software updates,
enhanced cybersecurity,
and reduced upfront hardware investment.
This trend is especially favorable for small and medium-sized restaurants that want scalable, subscription-based solutions without the complexity of traditional IT infrastructure.
4. Integration with Online Ordering & Delivery
The explosive growth of third-party delivery apps (Uber Eats, DoorDash, Grubhub, Zomato, Swiggy) and in-house ordering platforms has forced restaurants to seek POS solutions that offer seamless API integrations.
Modern POS systems help restaurants synchronize in-house dining, takeaway, and delivery orders in real time, reducing human errors and improving kitchen coordination.
5. Data Analytics & Business Intelligence
Beyond order and payment processing, restaurant POS terminals have become a key source of business intelligence. Data-driven features include:
sales forecasting,
inventory optimization,
staff performance monitoring,
predictive analytics for menu engineering.
These features help owners and managers make informed decisions, improving both profitability and operational efficiency.
Market Restraints
Despite robust growth, a few challenges continue to affect the market:
High Initial Costs: Advanced POS hardware, cloud subscriptions, and integration with third-party platforms can require significant upfront investment, especially for small independent restaurants.
Data Security Concerns: Restaurants handling sensitive payment data are increasingly targeted by cyberattacks. Compliance with PCI DSS and GDPR regulations is necessary but can be resource-intensive.
Connectivity Dependency: Cloud-based POS systems rely on stable internet connections. In regions with poor infrastructure, adoption remains limited.
Segment Analysis
By Product Type:
Fixed POS Terminals: Typically used in full-service restaurants (FSRs) where orders are placed at the table or counter. These systems often include touchscreen monitors, barcode scanners, receipt printers, and payment terminals.
Mobile POS Terminals (mPOS): Increasingly used by food trucks, cafés, small quick-service restaurants (QSRs), and pop-up kitchens, mPOS devices offer greater flexibility for order taking and payment processing.
By Component:
Hardware: Terminals, printers, handheld tablets, card readers, and kitchen display systems.
Software: Inventory management, employee scheduling, order routing, reporting, CRM, and loyalty program management.
Services: Installation, maintenance, cloud hosting, technical support, and software upgrades.
By End-User:
Quick-Service Restaurants (QSRs): Focused on speed and efficiency, heavily reliant on drive-thru POS, order queue management, and self-service kiosks.
Full-Service Restaurants (FSRs): Emphasize seamless table service and often integrate reservation and customer feedback systems.
Cafés & Bars: Prioritize split billing, customer loyalty programs, and fast drink order turnaround.
Catering Services & Cloud Kitchens: Depend on centralized POS systems integrated with logistics and inventory management tools.
Regional Analysis
North America
The U.S. and Canada represent a mature POS market, but the demand for upgrades and cloud-based systems remains high. Features such as AI-powered sales predictions, real-time performance dashboards, and advanced loyalty program integrations are gaining traction.
Europe
Driven by regulations like PSD2 and strong consumer data protection under GDPR, European restaurants prioritize secure and compliant payment systems. Contactless and mobile payments have reached high penetration rates, especially in the UK, Germany, France, and Scandinavia.
Asia-Pacific
The Asia-Pacific region is expected to exhibit the highest growth rate through 2032, thanks to:
the proliferation of small and mid-sized food businesses,
rapid urbanization,
government initiatives encouraging digital payments,
and the growing culture of food delivery and takeout.
China, India, Japan, and Southeast Asia are the primary growth hubs.
Industry Trends
1. AI and Machine Learning Integration
Artificial intelligence is enhancing POS systems' ability to predict customer preferences, optimize inventory, suggest upsells, and automate menu adjustments based on sales trends and seasonality.
2. Self-Service Kiosks
Especially popular in QSR chains like McDonald’s and Subway, self-service POS terminals reduce queue lengths, improve order accuracy, and enable customized ordering experiences.
3. Voice-Enabled POS
Voice technology is beginning to make its way into restaurant POS systems, enabling staff to place and adjust orders using voice commands — especially helpful in high-traffic kitchens.
4. Blockchain for Secure Payments
Though still emerging, blockchain-based payment solutions and transaction verification systems could become part of future POS terminals, especially for high-value or international restaurant transactions.
Competitive Landscape
The global restaurant POS terminal market is moderately fragmented, with a mix of established players and disruptive startups offering specialized solutions.
Key players include:
Square, Inc.
Toast, Inc.
Lightspeed POS Inc.
NCR Corporation
Revel Systems
Oracle Corporation (Micros)
TouchBistro Inc.
Clover Network, Inc.
ShopKeep (acquired by Lightspeed)
Panasonic Corporation
These companies are investing heavily in cloud platforms, AI-powered features, mobile apps, and third-party integrations to maintain a competitive edge.
Future Outlook
The future of the Restaurant POS Terminal Market looks bright, with technological innovation set to redefine how restaurants operate. Between 2024 and 2032, the industry is expected to benefit from:
Continued growth of contactless and mobile payments
Deep integration with delivery apps and logistics systems
Increased adoption of AI and real-time analytics
Expansion in emerging markets with digital-first food businesses
Ongoing migration to cloud-based POS platforms
As consumer preferences evolve and restaurants face increased pressure to deliver personalized, efficient service, POS terminals will remain a critical component of success — acting not just as payment processors but as comprehensive restaurant management hubs.
Conclusion
The Restaurant POS Terminal Market is positioned for significant growth and innovation over the coming decade, as the food service industry adapts to new digital realities and shifting customer expectations. Operators who invest in advanced, integrated POS systems will be best positioned to thrive in an increasingly competitive and experience-driven landscape.
Read Full Report: https://www.uniprismmarketresearch.com/verticals/banking-financial-services-insurance/restaurant-point-of-sale-terminal
Unveiling the Secrets Behind Food Delivery App Development
Demand for food delivery app development is reaching new highs. The food delivery market is growing rapidly and is anticipated to pass the $200 billion mark before the end of 2025. Ordering food online is simply more convenient, and as that convenience spreads, customers are increasingly drawn to on-demand food delivery services. Our team of experts shares the secrets of building a successful food delivery app so your business can stand out from the competition.
Succeeding in food delivery app development relies on matching the features, the user experience, and the technology to what your business wants to achieve. This article covers the smart choices in food delivery app development that make customers happier, make operations run more smoothly, and get more out of what you've invested in technology.
Evolution of Food Delivery Apps
Food delivery apps first came to light in the late 1990s, when internet-based ordering platforms began to gain popularity. Websites such as World Wide Waiter and Grubhub appeared, enabling customers to order food from a small number of restaurants. However, these early innovators often still relied on phoning orders through to restaurants and lacked the smooth integration we enjoy in modern apps.
The arrival of smartphones brought a significant change to the industry. Apps like Seamless and DoorDash invested in food delivery app development and entered the market in the early 2010s, integrating functions like GPS-based location services, a wide range of restaurants to choose from, and live order tracking. Customers quickly grew hooked on the unmatched convenience they offered.
Platforms within the gig economy such as Uber Eats and Postmates shook up the traditional model by offering delivery services to restaurants that hadn't offered it before. Nowadays, advancements continue: subscription plans come with benefits, "ghost kitchens" focus on preparing food specifically for delivery, and companies are exploring drone delivery and other automation to achieve super quick service.
Understanding User Needs
Knowing what users want is key to successful food delivery app development. By understanding what your target customers like, need, and struggle with, you can create an app that goes above and beyond. Prioritizing user needs means making ordering easy, navigation simple, and including features that solve real-life issues for hungry users. Neglecting user preferences results in confusing interfaces, features that don't hit the mark, and users switching to apps that better cater to their needs and offer superior convenience. Here is a brief look at how you can uncover user needs:
Market Research - Information about the people using your app, studying your competition, and keeping an eye on what's happening in the industry highlight general groups of users and what they like.
User Feedback - Feedback from surveys, reviews on app stores, and talking directly with customers show us what's effective, what's lacking, and what features people really enjoy.
Data Analysis - Observing how users move around your app, identifying where they leave during the ordering process, and noting popular search items provide hints on how to improve the design and options available.
Technology Stack and Infrastructure
A strong technology foundation is essential for thriving food delivery app development. While users interact with a visually appealing interface created using languages such as HTML, CSS, and JavaScript, often supported by frameworks like React or Angular, the backend operates silently in the background. This backend, powered by robust languages like Python, Node.js, or Ruby, manages intricate processes and data handling.
Efficient storage solutions like MySQL or MongoDB are employed depending on the data structure. APIs serve as intermediaries, facilitating smooth communication between various components of the app, such as payment gateways or mapping services. Cloud hosting guarantees seamless functioning as your app expands, while third-party integrations like Stripe for payments and Google Maps for location tracking become indispensable features.
Ultimately, selecting the appropriate tech stack involves considering scalability, security, and the expertise of your development team to establish a dependable and effective foundation for your food delivery app.
Prioritizing User Experience Design
Simple, easy-to-use designs, seamless movement through the app, and appealing visuals are key to a great user experience in food delivery apps. When users can quickly locate their preferred restaurant, smoothly complete orders, and enjoy an attractive interface, their satisfaction soars.
On the flip side, a poorly designed app with confusing navigation and unimpressive visuals leads to frustration and a negative experience. However, when design elements blend well together, users are inclined to become loyal customers, feeling confident and pleased when using your app to place their next food order. Here's a breakdown of core UX design principles at play within food delivery apps:
Focus on the User's Journey - User experience (UX) design charts the entire journey a customer goes through, starting from finding the app to completing an order and getting their food. Every interaction point should be assessed to ensure it's user-friendly and doesn't cause any difficulties.
Information Hierarchy - Organize crucial information for users in a logical sequence. Ensure that restaurant choices, menu selections, prices, and estimated delivery times are presented clearly and can be easily scanned.
Error Prevention & Handling - Foresee potential user errors, such as inputting an incorrect address, and incorporate features to prevent them. In case of errors, offer clear notifications and simple methods to correct the issue.
Feedback and Transparency - It's essential to provide clear confirmations for actions performed, like placing an order or adding an item to the cart. Real-time order tracking enhances transparency and fosters trust.
Logistics and Delivery Optimization
Here's a brief overview of how algorithms, route optimization, and real-time tracking collaborate to enhance efficiency and customer satisfaction in food delivery applications:
Algorithms & Route Optimization - Picture a conductor leading a symphony of food delivery. That's the role of algorithms. They sift through extensive data – including order locations, driver availability, traffic flow, and even weather forecasts – to identify the most efficient delivery routes. This reduces travel time for drivers, guaranteeing that food reaches customers quickly and piping hot. (A toy sketch of this idea appears after this list.)
Real-Time Tracking - It's more than just a "Where's my food?" function for customers. It offers valuable real-time information that loops back into the system. If there's a road closure or unexpected traffic, causing a delay, the algorithm can adapt routes instantly, ensuring the best delivery times and updating customers about any alterations.
Combined Impact - This effective combination of algorithms, route optimization, and live tracking forms a mutually beneficial scenario. Deliveries become quicker and more effective for your business, resulting in contented customers who receive their meals promptly and value the transparency of the operation.
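To give a feel for what route optimization means in code, here is a toy greedy "nearest neighbor" sketch in Python. Real dispatch systems weigh far more factors (traffic, driver availability, order batching); this is only an illustration of the underlying idea, and the coordinates are made up.
```python
import math

def distance(a, b):
    """Straight-line distance between two (lat, lon) points -- a simplification."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_route(start, stops):
    """Visit drop-off points by always heading to the nearest remaining stop."""
    route, current, remaining = [], start, list(stops)
    while remaining:
        nearest = min(remaining, key=lambda stop: distance(current, stop))
        route.append(nearest)
        remaining.remove(nearest)
        current = nearest
    return route

# Illustrative coordinates only.
print(greedy_route((53.48, -2.24), [(53.50, -2.25), (53.47, -2.20), (53.49, -2.30)]))
```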
Data Analytics and Personalization
Data analytics offers a valuable perspective on how your food delivery app operates internally. By examining user actions, such as browsing habits, preferences, and order history, you acquire a thorough comprehension of your customers' interests. This valuable information serves as the foundation for tailored marketing initiatives.
You have the ability to send customized promotions and deals, addressing individual tastes and preferences. Moreover, insights derived from data enable you to personalize the entire app encounter. This could entail recommending restaurants based on previous orders, providing simple reordering of favorite dishes, or even forecasting future orders for effortless, one-click ease. By comprehending your users at a personal level, you foster brand loyalty that distinguishes your app from competitors.
Security and Privacy Measures
Food delivery apps manage delicate information – including names, addresses, payment details, and even food preferences. Security and privacy must be prioritized from the outset. Strong measures are vital for establishing user confidence and safeguarding your business's image.
Essential security procedures involve robust encryption for data stored and during transmission. Reliable authentication methods with multi-factor options enhance account protection. Consistent vulnerability assessments and proactive updates address potential weaknesses. Crucially, adherence to data protection regulations such as GDPR or CCPA showcases a commitment to user privacy. Additionally, user-friendly tools for managing data preferences contribute positively.
Neglecting to prioritize security exposes you to the threat of data breaches, which can severely damage user trust. Conversely, a securely protected app cultivates loyalty and trust, enticing users to come back, assured that their information is protected.
Challenges and Solutions
Scalability - A sudden increase in orders can overwhelm an app that isn't designed to handle it.
Solution: Utilize cloud-based infrastructure, which provides flexible resources to manage fluctuations in demand, and ensure your database is designed for expansion right from the beginning.
Competitive Landscape - The market is saturated. How do you differentiate yourself?
Solution: Specializing in a niche area (such as late-night delivery or specific cuisines) can be advantageous. Additionally, prioritizing exceptional user experience and outstanding customer service, and implementing loyalty programs, can cultivate a dedicated customer following.
Regulatory Adherence - Compliance with data protection laws and regulations related to food handling or the gig economy is intricate and constantly evolving.
Solution: It's prudent to invest in legal guidance to guarantee compliance across your app design and business framework. Keep abreast of regulatory modifications as they occur.
Important Note - Success isn't solely reliant on technology; adopting savvy business tactics, staying ahead of regulatory requirements, and committing to delivering an unmatched experience will position your app for sustained prosperity.
Future Trends and Innovations
The food delivery sector is primed for ongoing innovation. Those who welcome emerging technologies, grasp changing consumer trends, and devise inventive methods to lessen their environmental footprint will lead the way in shaping the industry's progression.
Automation & Robotics - Expect continued experimentation with delivery robots, drones, and automated kitchen equipment as platforms look for ways to cut delivery times and labour costs.
Cloud Kitchens on the Rise - Restaurants focused solely on delivery, with no physical dining areas, keep their operations streamlined. Apps might collaborate with or even establish their own ghost kitchens to ensure exclusivity and maintain quality standards.
Subscription Models - Several apps already offer subscription benefits. Anticipate this to broaden, with offerings customized for various user groups (such as frequent customers, adventurous diners, etc.).
Enhanced Personalization - The future lies in AI-driven recommendations, extending beyond previous purchases to propose choices based on dietary requirements or immediate cravings.
Focus on Sustainability - Customers prioritize environmentally friendly delivery options. Apps might showcase restaurants with sustainable practices, enhance route efficiency to minimize emissions or investigate reusable packaging solutions.
Conclusion
Creating a thriving food delivery app requires more than just a love for food. Grasping the needs of your users, crafting a smooth user experience, and leveraging technology are essential for standing out in this competitive arena. Prioritizing security, maintaining adaptability for expansion, and embracing innovation are crucial for keeping your app relevant and becoming a go-to choice for hungry users.
Keep in mind that the food delivery sector is constantly evolving. Emerging technologies, shifts in consumer preferences, and new opportunities will reshape its terrain. This guide from CodeViner provides groundwork, and your continuous dedication to exploration and adaptation will guarantee the success of your app, delivering satisfaction with every order.
8 July 2020: Goats on Zoom. Pubs can track and trace.
Hello, this is the Co-op Digital newsletter - it looks at what's happening in the internet/digital world and how it's relevant to the Co-op, to retail businesses, and most importantly to people, communities and society. Thank you for reading - send ideas and feedback to @rod on Twitter. Please tell a friend about it!
[Image: Goat-2-meeting]
Goats on Zoom and waving goodbye
Why we can't stop waving at the end of video calls - we’re over-performing the social cues of conversation closure because we feel that just closing a browser tab is ambiguous or abrupt. This is just one of many ways that video calling is uncomfortable:
It’s harder to pick up on social cues. You’re looking at your own face. Others are dropping in and out of the call. You’re doing the “Can you hear me? No I think your mic is off” dance. You can’t see if your boss agrees with what you’re saying because the video makes her look like an impressionist painting. All of this makes video calls exhausting. You could say that video calls come with an unexpectedly large “cost of communication”.
If you’d like your calls to be more fun here’s an enterprising Lancashire farm that’ll have a goat participate in your Zoom call. The income has helped them keep staff on and will go toward renewable energy. “The goats are savvy in Microsoft teams, Webex, Blue Jeans, Skype, Google Hangouts, Jitsi, Go To Meeting and pretty much all other video platforms from which you can send a call invite link”, though the newsletter is confident that the goats don’t enjoy Webex.
#startupidea: if you’d pay £6 to have a goat join your video call for a laugh, what would you pay to have an owl join it so you didn’t need to be in the meeting at all? More, surely! (In Silicon Valley a brief goat-2-meeting costs $100!)
Also: Fujitsu announces permanent work-from-home plan - "unprecedented flexibility" to 80,000 workers in Japan. Work is going remote-friendly if not quite remote-first.
If you miss the office, you can put its soothing tones on in the background: I miss the office.
Pubs will track and trace
Pubs are reopening in England. Fancy a pint? You might need an app for that. UKGov suggested that pubs should record personal details of customers to help track and trace efforts if there were any outbreaks. But there are obvious concerns around privacy and public health: you’d guess that record keeping, data accuracy and access to data are all going to be difficult or uncertain in a pub.
On the other hand, pubs seem to be doing a pretty good job running an informal track and trace service!: “Lighthouse had contacted about 100 customers before NHS Test and Trace had been in touch with management”.
Primark down 75%: retail winners and losers
The virus lockdown has had uneven effects on retail. Obviously online has done well. Also "essential" sectors like supermarkets, though there, increased offline sales have been counterbalanced by increased virus costs (staffing, social distancing measures etc). And of course many businesses have raced to add online shopping: 85,000 businesses launch online shops as B2C and B2B ecommerce surge in lockdown.
Those who’ve struggled are “non-essential” sectors and those that rely on customer proximity: pubs, cafes, offline-only retail etc. Primark sales are down 75% in most recent quarter - this is the lockdown effect because Primark doesn’t sell online.
Ghost kitchens
“Ghost kitchens” have been around for a few years - these are restaurants with no seating: a smaller building in a cheaper location prepares the same number of meals, all for delivery. In 2018 Deliveroo was setting up delivery-only kitchens for restaurants:
A “ghost restaurant” is one that makes food solely for delivery customers - you can’t go to one and get a table. That’s the US term - in the UK they’re called “dark kitchens”. Deliveroo Editions is dark-kitchens-as-a-service: Deliveroo provides them to existing food brands. For larger restaurant operators, dark kitchens can load balance the peak times. For small ones, the kitchens can test demand in a new location without the capital investment in property: “The property requirement is data driven. Deliveroo know exactly where their customers are, the amount they spend, the frequency at which they order and the types of cuisine that are most popular in an area. From the fledgling operator’s perspective, this can lower the risk of venturing into uncharted territories and drive sales up as much as 400%.”
What if this idea was expanded out into high-street-as-a-service? Could you have a high street without a retail shopfront? Or a shopfront with no stock? The internet and logistics have made it possible to separate the point of product discovery from the purchase from the inventory from the delivery/handover. And these points can be recombined in many different ways. IKEA’s warehouse, Argos’s front/back of store, Deliveroo, Amazon’s many forms - all possible variations.
And in the virustimes of 2020, the ghost kitchens are doing well. They now look like "Ghost Kitchen Colonies" (or “commissary” kitchens): one site that hosts the kitchens for many restaurants.
“This means sharing ingredients, equipment, and cooking staff to supply multiple restaurant brands. For customers, this provides the opportunity to order different types of food from the same address, and saves companies on overheads and wasting resources.”
Uber: grocery delivery
Before the virus Uber was primarily a taxi company: it used cars to move people around. Once lockdowns started, the people weren’t moving, but the cars still could. So its Uber Eats cars-that-move-meals business overtook the cars-that-move-people business. In June it missed out on buying rival meal delivery platform Grubhub (Just Eat made a last minute order which won that one), so this week Uber has just bought US meal delivery service Postmates, and will run it alongside its existing Uber Eats.
But if you have an on-demand network of cars you can move other things with it too. Uber also says it will be starting grocery delivery in Canada, Latin America and some US cities soon. It bought Cornershop (a grocery delivery startup a bit like Instacart) last year. It’s also a step towards the logistics space. The next step might be combining the car with ghost kitchens: on demand food truck delivery kitchens.
Various things
Nicki Sprinz at ustwo: “As a white person with privilege and relative influence, I invite people to hold me to account, and I must stay uncomfortable. We have to do the hard work to ensure we start to dismantle the structural racism we have benefited from for years.”
Climatestrike software licence: “developers can use to prohibit the use of their code by applications or companies that threaten to accelerate climate change through fossil fuel extraction”.
“Diversity & Inclusion at Conferences and Events (DICE) provides certification and guidance to help conferences and events deliver a representative and diverse set of speakers, perspectives, and attendees.”
Goldman Sachs bank designs its own typeface. It has to work with lots of small numbers, so it might be good for spreadsheet fans (hello).
“Find yourself an alternative pint while also supporting local independent pubs and bars” - Neverspoons.
Co-op Digital news and events
The government’s consultation response on violence and abuse toward shop staff and The Co-op’s report on it in Sep 2019: “Our latest research shows that retail crime has reached epidemic proportions, with 115 retail workers physically attacked every day in the UK, with many more verbally abused and threatened. This needs to change. We need the UK Government to urgently protect shop workers and send a clear message that violence and verbal aggression will not be tolerated in shops.”
The Federation House team is running weekly drop-in chats for the community every Wednesday at 10am: Join us here. See our online events. You can also see how The Federation is planning for a safe return to the co-working floor.
Free of charge events:
Andy’s Man Club – Gentleman's Peer to Peer Mental Health Meet Up – Mondays 7pm
Volunteer with Code Your Future – Online Meet Up – 8 July - 6pm
Beginners Guide to Retrofit – Webinar – 8 July – 6pm
Northern Azure User Group – Online Meet Up – 8 July - 6pm
Building Resilience within your teams - Webinar – 9 July – 12.30pm
Accessing Open Data through API’s – Webinar – 14 July – 6.30pm
Python for Beginners – Online Workshop – 16 July - 4pm
Paid for events:
Invisible Cities - Online Tours of Manchester or Edinburgh – Various Dates & Times
Thank you for reading
Thank you, beloved readers and contributors. Please continue to send ideas, questions, corrections, improvements, etc to @rod on Twitter. If you have enjoyed reading, please tell a friend! If you want to find out more about Co-op Digital, follow us @CoopDigital on Twitter and read the Co-op Digital Blog. Previous newsletters.
Explore the world of Uber and Uber Eats API data sets, their types, and the diverse applications they offer. Discover how these data sets empower developers and businesses to enhance their understanding and leverage valuable insights.
For More Information:
https://www.iwebscraping.com/ubereats-decoding-api-data-sets-unleashing-potential-and-applications.php
Real-time Location Tracking with React Native and PubNub
With ever-increasing usage of mobile apps, geolocation and tracking functionality can be found in a majority of apps. Real-time geolocation tracking plays an important role in many on-demand services, such as these:
taxi services like Uber, Lyft or Ola
food delivery services like Uber Eats, Foodpanda or Zomato
monitoring fleets of drones
In this guide, we're going to use React Native to create a real-time location tracking app. We'll build two React Native apps. One will act as a tracking app (called "Tracking app") and the other will be the one that's tracked ("Trackee app").
Here’s what the final output for this tutorial will look like:
[Video: https://dab1nmslvvntp.cloudfront.net/wp-content/uploads/2019/09/1569381508tracking.mp4]
Prerequisites
This tutorial requires a basic knowledge of React Native. To set up your development machine, follow the official guide here.
Apart from React Native, we’ll also be using PubNub, a third-party service that provides real-time data transfer and updates. We’ll use this service to update the user coordinates in real time.
Register for a free PubNub account here.
Since we’ll be using Google Maps on Android, we’ll also need a Google Maps API key, which you can obtain on the Google Maps Get API key page.
To make sure we’re on the same page, these are the versions used in this tutorial:
Node v10.15.0
npm 6.4.1
yarn 1.16.0
react-native 0.59.9
react-native-maps 0.24.2
pubnub-react 1.2.0
Getting Started
If you want to have a look at the source code of our Tracker and Trackee apps right away, here are their GitHub links:
Trackee App repo
Tracker App repo
Let’s start with the Trackee app first.
Trackee App
To create a new project using react-native-cli, type this in the terminal:
$ react-native init trackeeApp
$ cd trackeeApp
Now let’s get to the fun part — the coding.
Add React Native Maps
Since we’ll be using Maps in our app, we’ll need a library for this. We’ll use react-native-maps.
Install react-native-maps by following the installation instructions here.
Add PubNub
Apart from maps, we’ll also install the PubNub React SDK to transfer our data in real time:
$ yarn add pubnub-react
After that, you can now run the app:
$ react-native run-ios
$ react-native run-android
You should see something like this on your simulator/emulator:
The post Real-time Location Tracking with React Native and PubNub appeared first on SitePoint.
by Vikrant Negi via SitePoint https://ift.tt/2lMlwx3
Smashing Podcast Episode 22 With Chris Coyier: What Is Serverless?
We’re talking about Serverless architectures. What does that mean, and how does it differ from how we might build sites currently? Drew McLellan talks to Chris Coyier to find out.
Today, we’re talking about Serverless architectures. What does that mean, and how does it differ from how we might build sites currently? I spoke to Chris Coyier to find out.
Show Notes
Weekly Update
Transcript
Drew McLellan: He’s a web designer and developer who you may know from CSS-Tricks, a website he started more than 10 years ago and that remains a fantastic learning resource for those building websites. He’s the co-founder of CodePen, the browser based coding playground and community used by front-enders all around the world to share what they make and find inspiration from those they follow. Alongside Dave Rupert is the co-host of ShopTalk Show, a podcast all about making websites. So we know he knows a lot about web development, but did you know he once won a hot dog eating competition using only his charm? My smashing friends, please welcome Chris Coyier. Hello Chris, how are you?
Chris Coyier: Hey, I’m smashing.
Drew: I wanted to talk to you today not about CodePen, and I don’t necessarily want to talk to you about CSS-Tricks, which is one of those amazing resources that I’m sure everyone knows appears right at the top of Google Search results when looking for answers about any web dev question. Up pops your face and there’s a useful blog post written by you or one of your guest contributors.
Chris: Oh, I used to actually do that. There was a… I don’t know, it probably was during the time of when Google had that weird social network. What was that? Google Plus?
Drew: Oh, Plus, yeah.
Chris: Yeah, where they would associate a website with a Plus account, and so my Plus account had an avatar, and the avatar was me, so it would show up in search results. I think those days are gone. I think if you…
Drew: I think so, yeah-
Chris: Yeah.
Drew: But I kind of wanted to talk to you about something that has been a little bit more of a sort of side interest of yours, and that’s this concept of serverless architectures.
Chris: Mm (affirmative).
Drew: This is something you’ve been learning sort of more about for a little while. Is that right?
Chris: Yeah, yeah. I’m just a fan. It seems like a natural fit to the evolution of front-end development, which is where I feel like I have, at least, some expertise. I consider myself much more of a… much more useful on the front-end than the back-end, not that I… I do it all these days. I’ve been around long enough that I’m not afraid of looking at a little Ruby code, that’s for sure. But I prefer the front-end. I’ve studied it more. I’ve participated in projects more at that level, and then along comes this little kind of a new paradigm that says, “You can use your JavaScript skills on the server,” and it’s interesting. You know? That’s how I think of it. There’s a lot more to it than that, but that’s why I care, is because I feel it’s like front-end developers have dug so deep into JavaScript. And now we can use that same skill set elsewhere. Mm, pretty cool.
Drew: Seems like a whole new world has opened up, whereas if you were just a front-end coder… I say, just a front-end coder, I shouldn’t. If you’re a front-end coder, and you’re used to working with a colleague or a friend to help you with the back-end implementation, suddenly that’s opened up. And it’s something that you can manage more of the whole stack yourself.
Chris: Yeah, yeah. That’s it.
Drew: Addressing the elephant in the room, right at the top. We’re talking about serverless, and obviously, naming things is hard. We all know that. Serverless architecture doesn’t mean there are no servers, does it?
Chris: I think it’s mandatory, like if this is the first podcast you’re hearing of it, or in the first… you’re only hearing the word “serverless” in the first dozen times you ever heard it, it’s mandatory that you have a visceral reaction and have this kind of, “Oh, but there are still servers.” That’s okay. If that’s happening to you right now, just know that, that’s a required step in this. It’s just like anything else in life. There’s stages to understanding. The first time you hear something, you’re required to kind of reject it a little bit, and then only after a dozen times or so, or after it’s proven its worth a little bit to you, do you get to enter the further stages of understanding here. But the word has won, so if you’re still fighting against the word “serverless”, I hate to tell you, that the train has left the station there. The word is already successful. You’re not going to win this one. So, sorry.
Chris: But I do think it’s interesting that… it’s starting to be like, maybe there actually aren’t servers involved sometimes. I would think one of the things that locked serverless in as a concept was AWS Lambda. They were kind of the first on the scene. A lambda is like a function that you give to AWS and it puts it in the magical sky and then… it has a URL, and you can hit it and it will run that function and return something if you want it to. You know? That’s just HTTP or whatever. That’s how it works, which… the first time you hear that, you’re like, “Why? I don’t care.” But then, there’s some obvious things to it. It could know my API keys that nobody else has access to. That’s why you run back-end to begin with, is that it knows secret stuff that doesn’t have to be in the JavaScript on the client side. So if it needs to talk to a database, it can do that. It can do that securely without having to expose API keys elsewhere. Or even where that data is or how it gets it, it’s…
Chris: So that’s pretty cool. I can write a function that talks to a database, get some data, returns that. Cool. So, Lambda is that, but AWS works. You have to pick a region. You’re like, “I don’t know. Where it should be, Virginia? Oregon? Should I pick the Australia one? I don’t know.” They have 20, 30. I don’t even know how many they have these days, but even lambdas had regions. They, I think, these days have Lambda@Edge, which means it’s all of the regions, which is kind of cool. But they were first, and now everybody’s got something like Lambda. All the cloud services. They want some kind of service in this world. One of them is CloudFlare. CloudFlare has workers. They have way more locations than AWS has, but they executed it kind of at a different time too… the way a CloudFlare worker… it’s similar to a lambda in that you can run Node. You can run JavaScript. You can run a number of other languages too, but… I think of this stuff largely, the most interesting language is JavaScript, just because of the prevalence of it.
Chris: It happens just at the CDN level, which I guess is a server, but I tend to not think of CDNs as a server. Not as obviously as something else. It’s starting to feel even more serverless-y lately. Is a CDN a server? I mean, I guess it’s a computer somewhere, but it feels like even less server-y.
Drew: It feels like, yes, a CDN may be a server, but it’s the most sort of minimal version of a server. It’s like a thin server, if you like.
Chris: Yeah. Sure.
Drew: All right. I’ve heard it said… I can’t remember the source to credit, unfortunately, but I’ve heard serverless described as being “like using a ride-sharing service like Uber or Lyft” or whatever. You can be carless and not own a car, but that doesn’t mean you never use a car.
Chris: Yeah, it doesn’t mean cars don’t exist. Mm, that’s nice.
Drew: You just summon one when you need it, but at the same time, you’re not paying the upfront purchase cost of a car. You’re not paying maintenance or fuel or-
Chris: Right, and the pricing makes sense, too, right? That’s nice. That’s a nice analogy, I think. And then, because it’s at the CDN level too, it just intercepts HTTP requests that are already happening, which means you don’t ask it… you don’t send a request to it and it sends a request back. It’s just happening during the request naturally, which also makes it feel less server-y. I don’t know, it’s interesting. It’s interesting for sure. So that’s a big deal, though, that you brought up the pricing thing. That you only pay for what you use. That’s significant too, because… let’s say, you’re a back-end dev, who’s used to spinning up servers their whole life. And they run the costs, “I need this kind of server with this kind of memory and this kind of CPU and these kind of specs. And this is how much it’s going to cost.” Serverless comes along and chops the head off of that pricing.
Chris: So, even if you’re a back-end dev who just doesn’t like this that much, that they’re just not into it, like your skill set is just what it is over the years, you compare the price and you’re like, “What? I could be paying 1% of what I was paying before?” You are not allowed to not care about that, right? If you’re this back-end dev that’s paying a hundred times more for their service than they need to be paying, you’re just kind of bad at your job then. Sorry to say. This has come along and this has shattered pricing in a lot of ways. You have to care about that. And it’s kind of cool that somebody else is… It’s not like you don’t have to worry about security at all, but it’s not your server. You don’t have… your lambda or cloud function, or your worker, or whatever, isn’t sitting on a server that’s right next to some really sensitive data on your own network. It’s not right next to your database.
Chris: If somebody writes code that somehow tries to eject itself from the worker or the lambda, or whatever, and try to get access to other things in their way, there’s nothing there to get. So the security’s a big deal too, so again, if that’s your job as the server admin, is to deal with the security of this thing. Running it, running certain things in Lambda, you just get some natural security from it, which is great. So, it’s way cheaper. It’s way more secure. It encourages these small modular architecture, which can be a good idea. It seems to be domino after domino of good ideas here. That’s why it’s notable. You know?
Drew: Yeah, I mean, traditionally with a server based architecture that we’ve been running for decades on the web, you have a web server that you run yourself. It holds your front-end code, your back-end code, your database and everything. Then you need to maintain that and keep it running and pay the bills, and even if it’s not being used, it’s there clocking up bills. The user would make a request and it would build all that HTML query stuff from the database, send it all down the line to the browser. That process works. It’s how loads of things are built. It’s probably the majority of how the web is built. It’s how things like WordPress work. Is this really a problem that we need to solve? I mean, we’ve talked about costs a little bit. What are the other sort of problems with that, that we’re… that we need to address, and that serverless might help us with?
Chris: Yeah, the problems with the old school approach. Yeah, I don’t know, maybe there isn’t any. I mean, I’m not saying the whole web needs to change their whole… the whole thing overnight. I don’t know. Maybe it doesn’t really, but I think it opens up doors. It just seems like, when good ideas arrive like this, they just slowly change how the web operates at all. So, if there’s some CMS that is built in some way that expects a database to be there, it means that maybe the hosts of the future will start leveraging this in interesting ways. Maybe it feels to you like it’s still just a traditional server, but the hosts themselves have farmed it out, how they operate, to serverless architectures. So you don’t even really know that that’s happening, but they’ve found a way to slash their costs by hosting the stuff that you need in serverless ways. Maybe yeah don’t even need to care as a developer, but at a meta level, that’s what’s happening. Maybe. I don’t know.
Chris: It also doesn’t mean that… Databases are still there. If it turns out that architecturally having a relational database is the correct way to store that data, great. I mention that because this world of Serverless is kind of growing up at the same time that JAMstack is. And JAMstack is this architecture that’s, “You should be serving your website off of static hosts, that run nothing at all except for…” They’re like little CDNs. They’re like, “I can do nothing. I don’t run PHP. I don’t run Ruby. I run nothing. I run on a tiny little web server that’s just designed to serve static files only.”
Chris: “And then, if you need to do more than that, if you need to pull data from a relational database, then please do it at some other time, not at the server time. You can either do it in a build process ahead of time, and pull that stuff out of the database, pre-build static files and I’ll serve those, or do it at runtime.” Meaning you get this shell of a document, and then it makes a JavaScript request to get some data and prefills it then. So you do it ahead of time or after time, but it doesn’t mean, “Don’t use a relational database.” It just means, “Don’t have the server generate it at the time of the request of the document,” which is a… I don’t know, it’s a little bit of a paradigm shift.
Chris: It’s not just JAMstack either. We’re also living in the time of JavaScript frameworks. We’re living in a time where it’s starting to be a little more expected that the way that a JavaScript application boots up, is that it mounts some components, and as those components mount, it asks for the data that it needs. And so, it can be kind of a natural fit for something like a React website to be like, “Well, I’ll just hit a serverless function to cough up the data that it needs. It hits some JSON API essentially. I get the JSON data that I need and I construct myself out of that data, and then I render onto the page.” Now, whether that’s good or bad for the web, it’s like, “I don’t know. Too bad. Ship has sailed. That’s how a lot of people are building sites.” It’s just client rendered things. So, serverless and modern JavaScript kind of go hand in hand.
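To picture that mount-then-fetch pattern, here is a minimal sketch in React. The endpoint path and the shape of the JSON it returns are invented for illustration; they are not from any real project mentioned here.

```js
import React, { useEffect, useState } from "react";

// Hypothetical endpoint: a serverless function that returns JSON.
const PRODUCTS_ENDPOINT = "/.netlify/functions/products"; // assumption, not a real project

function ProductList() {
  const [products, setProducts] = useState(null);

  useEffect(() => {
    // The component mounts as an empty shell, then asks for its data.
    fetch(PRODUCTS_ENDPOINT)
      .then((res) => res.json())
      .then(setProducts)
      .catch(() => setProducts([])); // fail soft if the function is unreachable
  }, []);

  if (!products) return <p>Loading…</p>;
  return (
    <ul>
      {products.map((p) => (
        <li key={p.id}>{p.name}</li>
      ))}
    </ul>
  );
}

export default ProductList;
```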
Drew: I suppose you don’t have to wholesale… be looking at one architecture or another. There’s an area in the middle where parts of an infrastructure might be more traditional and parts could be serverless, I’m guessing?
Chris: Yeah. Well, they’re trying to tell you that anyway. Anybody that wants to sell you any part of their architecture is like, “You don’t have to buy in all right now. Just do it a little bit.” Because of course, they want you to dip your toe into whatever they’re selling, because once you dip the toe, the chances that you splash yourself into the pool are a lot higher. So, I think that… it’s not a lie, though, necessarily, although I find a little less luck in… I don’t want my stack to be a little bit of everything. I think there’s some technical debt there that I don’t always want to swallow.
Drew: Mm (affirmative).
Chris: But it’s possible to do. I think the most quoted one is… let’s say I have a site that has an eCommerce element to it, which means… and let’s say large scale eCommerce, so 10,000 products or something, that this JAMstack architecture hasn’t gotten to the point where that’s always particularly efficient to rebuild that statically. So, the thinking goes, “Then don’t.” Let that part kind of hydrate naturally with… hit serverless functions and get the data that it needs, and do all that. But the rest of the site, which isn’t… there’s not as many pages, there’s not as much data, you could kind of pre-render or whatever. So a little bit of both.
Drew: Of course, plenty of people are dealing with legacy systems that… some old database thing that was built in the 2000s that they may be able to stick a sort of JSON API layer on top of…
Chris: Yeah.
Drew: … and build something more modern, and perhaps serverless, and then still interact with those legacy systems by sort of gluing it altogether in a weird way.
Chris: Yeah. I like that though, isn’t it? Aren’t… most websites already exist. How many of us are totally green-fielding websites? Most of us work on some crap that already exists that needs to be dragged into the future for some reason, because I don’t know, developers want to work faster, or you can’t hire anybody in COBOL anymore, or whatever the story is. You know?
Drew: So terminology wise, we’re talking about JAMstack which is this methodology of running a code pretty much in the browser, serving it from a CDN. So, not having anything dynamic on the server. And then when we talk about serverless, we’re talking about those small bits of functionality that run on their server somewhere else. Is that right? That we were talking about these cloud function kind of-
Chris: Yeah, I mean, they just happen to be both kind of hot ideas right now. So it’s kind of easy to talk about one and talk about the other. But they don’t necessarily need to be together. You could run a JAMstack site that has nothing to do with serverless anything. You’re just doing it, you just pre-build the site and run it, and you can use serverless without having to care about JAMstack. In fact, CodePen does nothing JAMstack at all. Not that we want to talk about CodePen necessarily, but it’s a Ruby on Rails app. It runs on a whole bunch of AWS EC2 instances and a variety of other architecture to make it happen. But we use serverless stuff whenever we can for whatever we can, because it’s cheap and secure, and just a nice way to work. So, no JAMstack in use at all but serverless all over the place.
Drew: That’s quite interesting. What sort of tasks are you putting serverless to on CodePen?
Chris: Well, there’s a whole bunch of things. One of them is, I think, hopefully fairly obvious is, I need… the point of CodePen is that you write each HTML, CSS and JavaScript in the browser and it renders it in front of you, right? But you can pick pre-processor languages as well. Let’s say you like Sass. You turn Sass on in the CSS, and you write Sass. Well, something has to process the Sass. These days, Sass is written in Dart or something.
Chris: Theoretically, you could do that in the client. But these libraries that do pre-processing are pretty big. I don’t think I want to ship the entire Sass library to you, just to run that thing. I don’t want to… it’s just not, that’s not the right architecture for this necessarily. Maybe it is down the road, I mean, we could talk about offline crap, yada, yada, Web Workers. There’s a million architectural things we could do. But here’s how it does work now, is there’s a lambda. It processes Sass. It has one tiny, tiny, tiny, little job.
Chris: You send it this blob of Sass and it sends you stuff back, which is the processed CSS, maybe a source map, whatever. It has one tiny little job and we probably pay for that lambda, like four cents or something. Because lambdas are just incredibly cheap and you can hammer it too. You don’t have to worry about scale. You just hit that thing as much as you want and your bill will be astonishingly cheap. There are moments where serverless starts to cross that line of being too expensive. I don’t know what that is, I’m not the master of stuff like that. But generally, any serverless stuff we do, we basically… nearly all of it counts as free, because it’s that cheap. But there’s one for Sass. There’s one for Less. There’s one for Babel. There’s one for TypeScript. There’s one for… All those are individual lambdas that we run. Here’s some code, give it to the lambda, it comes back, and we do whatever we’re going to do with it. But we use it for a lot more than that, even recently.
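To make that one-tiny-job idea concrete, here is a minimal sketch of a Sass-compiling function in the AWS Lambda style. It is not CodePen’s actual code; the request and response shapes are assumptions. It leans on the Dart Sass package published to npm as `sass`.

```js
// A tiny Lambda-style handler with exactly one job: compile Sass.
// Sketch only; the request/response shape is an assumption, not CodePen's real API.
const sass = require("sass"); // Dart Sass, published to npm as "sass"

exports.handler = async (event) => {
  try {
    const { source } = JSON.parse(event.body || "{}");
    const result = sass.compileString(source || "");
    return {
      statusCode: 200,
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ css: result.css }),
    };
  } catch (err) {
    // Bad Sass in, error message out — the function has nothing else to protect.
    return { statusCode: 400, body: JSON.stringify({ error: err.message }) };
  }
};
```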
Chris: Here’s an example. Every single Pen on CodePen has a screenshot. That’s kind of cool, right? So, the people make a thing and then we need a PNG or a JPEG, or something of it, so that we can… that way when you tweet it, you get the little preview of it. If you share it in Slack, you get the little preview of it. We use it on the website itself to render… instead of an iframe, if we could detect that the Pen isn’t animated, because an image is much lighter than an iframe, so why not use the image? It’s not animated anyway. Just performance gains like that. So each of those screenshots has a URL to it, obviously. And we’ve architected it so that that URL is actually a serverless function. It’s a worker. And so, if that URL gets hit, we can really quickly check if we’ve already taken that screenshot or not.
Chris: That’s actually enabled by CloudFlare Workers, because CloudFlare Workers are not just a serverless function, but they have a data store too. They have this thing called key-value store, so the ID of that, we can just check really quick and it’ll be, “True or false, do you have it or not?” If it’s got it, it serves it. And it serves it over CloudFlare, which is super fast to begin with. And then gives you all this ability too. Because it’s an image CDN, you can say, “Well, serve it in the optimal format. Serve it as these dimensions.” I don’t have to make the image in those dimensions. You just put the dimensions in the URL and it comes back as that size, magically. So that’s really nice. If it doesn’t have it, it asks another serverless function to make it really quick. So it’ll make it and then it’ll put it in a bucket somewhere… because you have to have an origin for the image, right? You have to actually host it somewhere usually. So we put it in an S3 bucket real quick and then serve it.
Chris: So there’s no queuing server, there’s no nothing. It’s like serverless functions manage the creation, storage and serving of these images. And there’s like 50 million or 80 million of them or something. It’s a lot, so it handles that as scale pretty nicely. We just don’t even touch it. It just happens. It all happens super fast. Super nice.
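A rough sketch of that screenshot flow as a Cloudflare Worker follows. The KV binding name, the maker-function URL, and the idea that KV stores the bucket URL are all assumptions made for illustration, not CodePen’s real setup.

```js
// Sketch of the screenshot flow as a Cloudflare Worker (service-worker syntax).
// SCREENSHOTS (a KV namespace) and SCREENSHOT_MAKER_URL are assumed bindings.
addEventListener("fetch", (event) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  const id = new URL(request.url).pathname; // e.g. /pen/abc123.png

  // 1. Quick key-value check: have we already taken this screenshot?
  const existingUrl = await SCREENSHOTS.get(id);
  if (existingUrl) {
    // The image already lives in a bucket; just proxy it through the CDN.
    return fetch(existingUrl);
  }

  // 2. If not, ask another (hypothetical) serverless function to make it,
  //    drop it into a bucket, and tell us where it landed.
  const made = await fetch(`${SCREENSHOT_MAKER_URL}?pen=${encodeURIComponent(id)}`);
  const { url } = await made.json();

  // 3. Remember it for next time, then serve it.
  await SCREENSHOTS.put(id, url);
  return fetch(url);
}
```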
Drew: I guess it… well, a serverless function is ideally going to suit a task that needs very little knowledge of state of things. I mean, you mentioned CloudFlare’s ability to store key-value pairs to see if you’ve got something cached already or not.
Chris: Yeah. That’s what they’re trying to solve, though, with those. Those key-value pairs, is that… I think that traditionally was true. They’re like, “Avoid state in the thing,” because you just can’t count on it. And CloudFlare Workers are being like, “Yeah, actually, you can deal with state, to some degree.” It’s not as fancy as a… I don’t know, it’s key values, so it’s a key in a value. It’s not like a nested, relational fancy thing. So there’s probably some limits to that. But this is baby days for this. I think that stuff’s going to evolve to be more powerful, so you do have some ability to do some state-like stuff.
Drew: And sometimes the limitation, that sort of limited ability to maintain state, or the fact that you have no… you want to maintain no state at all, kind of pushes you into an architecture that gives you this sort of… Well, when we talk about the software philosophy of “Small Pieces Loosely Joined”, don’t we?
Chris: Mm (affirmative).
Drew: Where each little component does one thing and does it well. And doesn’t really know about the rest of the ecosystem around it. And it seems that really applies to this concept of serverless functions. Do you agree?
Chris: Yeah. I think you could have a philosophical debate whether that’s a good idea or not. You know? I think some people like the monolith, as it were. I think there’s possible… there’s ways to overdo this and to make too many small parts that are too hard to test altogether. It’s nice to have a test that’s like, “Oh, I wonder if my Sass function is working. Well, let’s just write a little test for it and make sure that it is.” But let’s say, what matters to the user is some string of seven of those. How do you test all seven of them together? I think that story gets a little more complicated. I don’t know how to speak super intelligently to all that stuff, but I know that it’s not necessarily that, if you roll with all serverless functions that’s automatically a better architecture than any other architecture. I like it. It reasons out to me nicely, but I don’t know that it’s the end-all be-all of all architectures. You know?
Drew: To me, it feels extremely web-like, in that… this is exactly how HTML works, isn’t it? You deliver some HTML and the browser will then go and fetch your images and fetch your JavaScript and fetch your CSS. It seems like it’s an expansion of that –
Chris: It’s nice.
Drew: … sort of idea. But, one thing we know about the web, is it’s designed to be resilient because network’s fragile.
Chris: Mm (affirmative).
Drew: How robust is the sort of serverless approach? What happens if something… if one of those small pieces goes away?
Chris: That would be very bad. You know? It would be a disaster. Your site would go down just like any other server, if it happens to go down, I guess.
Drew: Are there ways to mitigate that, that are particularly –
Chris: I don’t know.
Drew: … suited to this sort of approach, that you’ve come across?
Chris: Maybe. I mean, like I said, a really super fancy robust thing might be like… let’s say you visit CodePen and let’s say that there’s a JavaScript implementation of Sass and we noticed that you’re on a fairly fast network and that you’re idle right now. Maybe we’ll go grab that JavaScript and we’ll throw it in a service worker. Then, if we detect that the lambda fails, or something, or that you have this thing installed already, then we’ll hit the service worker instead of the lambda, and service workers are able to work offline. So, that’s kind of nice too. That’s interesting. I mean, they are the same language-ish. Service workers are JavaScript and a lot of Cloud functions are JavaScript, so there’s some… I think that’s a possibility, although that… it’s just, that’s some serious technical stuff that… It just scares me to have this chunk of JavaScript that you’ve delivered to however many thousands of users, that you don’t necessarily know what they have, and what version of it they have. Eww, but that’s just my own scarediness. I’m sure some people have done a good job with that type of thing.
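The fallback idea could be sketched roughly like this from the page’s point of view: try the lambda, and if the network call fails, reach for a compiler that was quietly cached earlier. Every name here (the URL, the loader, the local compile call) is hypothetical, and this simplifies away the actual service-worker routing Chris describes.

```js
// Sketch only: try the lambda first, fall back to a locally cached
// JavaScript compiler if the network call fails.
// LAMBDA_URL, loadCachedCompiler() and compileSassLocally() are hypothetical.
async function compileSass(source) {
  try {
    const res = await fetch(LAMBDA_URL, {
      method: "POST",
      body: JSON.stringify({ source }),
    });
    if (!res.ok) throw new Error(`Lambda responded ${res.status}`);
    return (await res.json()).css;
  } catch (err) {
    // Offline, or the function is down: use the compiler we quietly
    // downloaded earlier while the user was idle.
    const { compileSassLocally } = await loadCachedCompiler(); // hypothetical
    return compileSassLocally(source);
  }
}
```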
Chris: I actually don’t know. Maybe you know some strategies that I don’t, on resiliency of serverless.
Drew: I guess there’s a failure mode, a style of failure, that could happen with serverless functions, where you run a function once and it fails, and you can run it a second time immediately afterwards and it would succeed, because it might hit a completely different server. Or whatever the problem was with that run may not exist on a second request. The issue of an entire host being down is one thing, but maybe there are… you have individual problems with the machine. You have a particular server where its memory has gone bad, and it’s throwing a load of errors, and the first time you hit it, it’s going to fail. Second time, that problem might have been routed around.
Chris: Companies that tend to offer this technology, you have to trust them, but they also happen to be the type of companies that… this is their pride. This is the reason why people use them is because they’re reliable. I’m sure people could point to some AWS outages of the past, but they tend to be a little rare, and not super common. If you were hosting your own crap, I bet they got you beat from an SLA percentage kind of level. You know? So it’s not like, “Don’t build in a resilient way,” but generally the type of companies that offer these things are pretty damn reliable. The chances of you going down because you screwed up that function are a lot higher than because their architecture is failing.
Drew: I suppose, I mean, just like anything where you’re using an API or something that can fail, is just making sure you structure your code to cope with that failure mode, and to know what happens next, rather than just throwing up an error to the user, or just dying, or what have you. It’s being aware of that and asking the user to try again. Or trying again yourself, or something.
Chris: Yeah, I like that idea of trying more than once, rather than just being, “Oh no. Fail. Abort.” “I don’t know, why don’t you try again there, buddy?”
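A small retry helper captures that “try again there, buddy” idea: if a serverless call fails once, the next attempt may land on entirely different hardware and succeed. This is a generic sketch, not tied to any particular provider.

```js
// Retry a serverless call a few times before giving up, with a brief,
// growing pause between attempts.
async function fetchWithRetry(url, options = {}, attempts = 3) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      const res = await fetch(url, options);
      if (res.ok) return res;
      lastError = new Error(`HTTP ${res.status}`);
    } catch (err) {
      lastError = err;
    }
    await new Promise((resolve) => setTimeout(resolve, 250 * (i + 1)));
  }
  throw lastError;
}
```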
Drew: So I mean, when it comes to testing and development of serverless functions, sort of cloud functions, is that something that can be done locally? Does it have to be done in the cloud? Are there ways to manage that?
Chris: I think there are some ways. I don’t know if the story is as awesome. It’s still a relatively new concept, so I think that that gets better and better. But from what I know, for one thing, you’re writing a fairly normal Node function. Assuming you’re using JavaScript to do this, and I know that on Lambda specifically, they support all kinds of stuff. You can write a fricking PHP Cloud Function. You can write a Ruby Cloud Function. So, I know I’m specifically talking about JavaScript, because I have a feeling that most of these things are JavaScript. Even no matter what language it is, I mean, you can go to your command line locally and execute the thing. Some of that testing is… you just test it like you would any other code. You just call the function locally and see if it works.
Chris: It’s a little different story when you’re talking about an HTTP request to it, that’s the thing that you’re trying to test. Does it respond to the request properly? And does it return the stuff properly? I don’t know. The network might get involved there. So you might want to write tests at that level. That’s fine. I don’t know. What is the normal story there? You spin up some kind of local server or something that serves it. Use Postman, I don’t know. But there’s… Frameworks try to help too. I know that the serverless “.com”, which is just terribly confusing, but there’s literally a company called Serverless and they make a framework for writing the serverless functions that helps you deploy them.
Chris: So if you like NPM install serverless, you get their framework. And it’s widely regarded as very good, because it’s just very helpful, but they don’t have their own cloud or whatever. You write these and then it helps you get them to a real lambda. Or it might work with multiple cloud providers. I don’t even know these days, but their purpose of existing is to make the deployment story easier. I don’t know what… AWS is not renowned for their simplicity. You know? There’s all this world of tooling to help you use AWS and they’re one of them.
Chris: They have some kind of paid product. I don’t even know what it is exactly. I think one of the things they do is… the purpose of using them is for testing, is to have a dev environment that’s for testing your serverless function.
Drew: Yeah, because I guess, that is quite a big part of the workflow, isn’t it? If you’ve written your JavaScript function, you’ve tested it locally, you know it’s going to do the job. How do you actually pick which provider it’s going to go into and how do you get it onto that service? Now, I mean, that’s a minefield, isn’t it?
Chris: Yeah. I mean, if you want to use no tooling at all, I think they have a really… like AWS, specifically, has a really rudimentary GUI for the thing. You can paste the code in there and hit save and be like, “Okay, I guess it’s live now.” That’s not the best dev story, but I think you could do it that way. I know CloudFlare workers have this thing called Wrangler that you install locally. You spin it up and it spins up a fake browser on the top and then dev tools below. Then you can visit the URL and it somehow intercepts that and runs your local cloud function against it. Because one of the interesting things about workers is… you know how I described how it… you don’t hit a URL and then it returns stuff. It just automatically runs when you… when it intercepts the URL, like CDN style.
Chris: So, one of the things it can do is manipulate the HTML on the way through. The worker, it has access to the complete HTML document. They have a jQuery-esque thing that’s like, “Look for this selector. Get the content from it. Replace it with this content. And then continue the request.” So you can mess with code on the way through it. To test that locally, you’re using their little Wrangler tool thing to do that. Also, I think the way we did it was… it’s also a little dangerous. The second you put it live, it’s affecting all your web traffic. It’s kind of a big deal. You don’t want to screw up a worker. You know? You can spin up a dev worker that’s at a fake subdomain, and because it’s CloudFlare, you can… CloudFlare can just make a subdomain anyway. I don’t know. It’s just kind of a nice way to do a… as you’re only affecting sub-domain traffic, not your main traffic yet. But the subdomain’s just a mirror of a production anyway, so that’s kind of a… that’s a testing story there.
Chris: It brings up an interesting thing, though, to me. It’s like… imagine you have two websites. One of them is… for us it’s like a Ruby on Rails app. Whatever. It’s a thing. But we don’t have a CMS for that. That’s just like… it’s not a CMS, really. I think there’s probably Ruby CMSs, but there’s not any renowned ones. You know? It seems like all the good CMSs are PHP, for some reason. So, you want a quality CMS. Drew, you’ve lived in the CMS market for a long time –
Drew: Absolutely.
Chris: … so you know how this goes. Let’s say you want to manage your sites in Perch or whatever, because it’s a good CMS and that’s the proper thing to use to build the kind of pages you want to build. But you don’t want to run them on the same server. You want to manage the pages on one site, but show them on another site. Well, I don’t know, there’s any number of ways to do that. But one JavaScript way could be, “Okay, load the page. There’s an empty div there. Run some JavaScript. Ask the other site for the content of that page and then plunk it out on the new page.” That’s fine, I guess, but now you’re in a client side rendered page. It’s going to be slow. It’s going to have bad SEO, because… Google will see it eventually, but it takes 10 days or something. It’s just a bad story for SEO. It’s not very resilient, because who knows what’s going to happen in the network. It’s not the greatest way to do this kind of “content elsewhere, content on site B, show page of site A” situation.
Chris: You could also do it on the server side, though. Let’s say you had… Ruby is capable of granting a network request too, but that’s even scarier because then if something fails on the network, the whole page could die or something. It’s like a nervous thing. I don’t love doing that either. But we did this just recently with a worker, in that we… because the worker’s JavaScript, it can make a fetch request. So, it fetches site A, it finds this div on the page, and then it goes and asks site B for the content. Gets the content. Plugs it into that div, and serves the page before it gets anything. So it looks like a server rendered page, but it wasn’t. It all happened at the… on the edge, at the worker level, at the serverless level.
Chris: So it’s kind of cool. I think you can imagine a fetch request on the browser probably takes, I don’t know, a second and a half or something. It probably takes a minute to do it. But because these are… site B is hosted on some nice hosting and Cloudflare has some… who knows what kind of super computers they use to do it. They do. Those are just two servers talking to each other, and that fetch request happens just so super duper, duper fast. It’s not limited to the internet connection speed of the user, so that little request takes like two milliseconds to get that data. So it’s kind of this cool way to stitch together a site from multiple sources and have it feel like, and behave like, a server rendered page. I think there’s a cool future to that.
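Cloudflare Workers expose an HTMLRewriter API that fits this kind of edge stitching. The sketch below fetches page A and a fragment from site B, fills a placeholder div, and serves the combined document; the selector and the SITE_B_URL value are assumptions, not the setup Chris actually used.

```js
// Sketch of the edge-stitching trick: fetch page A, find a placeholder div,
// fill it with content fetched from site B, and serve the combined document,
// all inside the worker, before anything reaches the browser.
addEventListener("fetch", (event) => {
  event.respondWith(stitch(event.request));
});

async function stitch(request) {
  // Two server-to-server fetches; very fast compared to the client doing it.
  const [pageA, fragmentB] = await Promise.all([
    fetch(request),
    fetch(SITE_B_URL).then((res) => res.text()), // SITE_B_URL is an assumed binding
  ]);

  return new HTMLRewriter()
    .on("div#cms-content", {
      element(el) {
        // Plunk site B's content into site A's empty div.
        el.setInnerContent(fragmentB, { html: true });
      },
    })
    .transform(pageA);
}
```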
Drew: Are there any sort of conventions that are sort of springing up around serverless stuff. I’m sort of thinking about how to architect things. Say I’ve got something where I want to do two sort of requests to different APIs. I want to take in a postal address and geocode it against one, and then take those coordinates and send that to a florist who’s going to flower bomb my front yard or something. How would you build that? Would you do two separate things? Or would you turn that into one function and just make the request once from the browser?
Chris: Mm (affirmative). That’s a fascinating question. I’d probably have an architect function or something. One function would be the one that’s in charge of orchestrating the rest of them. It doesn’t have to be, your website is the hub and it only communicates to this array of single sources. Serverless functions can talk to other serverless functions. So I think that’s somewhat common to have kind of an orchestrator function that makes the different calls and stitches them together, and returns them as one. I think that is probably smart and faster, because you want servers talking to servers, not the client talking to a whole bunch of servers. If it can make one request and get everything that it needs, I think that’s probably generally a good idea-
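An orchestrator function for Drew’s flower-bombing example might look roughly like this. Both upstream APIs, their endpoints, and the environment variable names are invented; the sketch also assumes a Node runtime where fetch is available globally.

```js
// Sketch of an "orchestrator" function: the browser makes one request, and
// this function fans out to the other services server-to-server.
exports.handler = async (event) => {
  const { address } = JSON.parse(event.body || "{}");

  // 1. Geocode the postal address (hypothetical geocoding API and key).
  const geoRes = await fetch(
    `https://geocoder.example.com/search?q=${encodeURIComponent(address)}&key=${process.env.GEO_API_KEY}`
  );
  const { lat, lng } = await geoRes.json();

  // 2. Hand the coordinates to the (equally hypothetical) florist API.
  //    The secret key lives in an environment variable, never in the client.
  const bombRes = await fetch("https://florist.example.com/flower-bomb", {
    method: "POST",
    headers: { Authorization: `Bearer ${process.env.FLORIST_API_KEY}` },
    body: JSON.stringify({ lat, lng }),
  });

  // 3. Return one combined answer to the client.
  return { statusCode: 200, body: JSON.stringify(await bombRes.json()) };
};
```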
Drew: Yeah, that sounds smart. Yep.
Chris: But I think that’s the ultimate thing. You get a bunch of server nerds talking, they’ll talk about the different approaches to that exact idea in 10 different ways.
Drew: Yeah. No, that sounds pretty smart. I mean, you mentioned as well that this approach is ideal if you’re using APIs where you’ve got secret information. You’ve got API keys or something that you don’t want to live in the client. Because I don’t know, maybe this florist API charges you $100 every time you flower bomb someone.
Chris: Easily.
Drew: You can basically use those functions to almost proxy the request and add in the secret information as it goes, and keep it secret. That’s a viable way to work?
Chris: Yeah, yeah. I think so. I mean, secrets are, I don’t know, they’re interesting. They’re a form of buy in I think to whatever provider you go with, because… I think largely because of source control. It’s kind of like, you could just put your API key right in the serverless function, because it’s just going to a server, right? You don’t even have to abstract it, really. The client will never see that code that executes, but in order for it to get there, there’s probably a source control along the way. It’s probably like you commit to master, and then master… then some kind of deployment happens that makes that thing go to the serverless function. Then you can’t put your API key in there, because then it’s in the repo, and you don’t put your API keys in repos. That’s good advice. Now there’s stuff. We’ve just done… at CodePen recently, we started using this git-crypt thing, which is an interesting way to put keys safely into your repos, because it’s encrypted by the time anybody’s looking at that file.
Chris: But only locally they’re decrypted, so they’re useful. So it’s just kind of an interesting idea. I don’t know if that helps in this case, but usually, cloud providers of these things have a web interface that’s, “Put your API keys here, and we’ll make them available at runtime of that function.” Then it kind of locks… it doesn’t lock you in forever but it kind of is… it’s not as easy to move, because all your keys are… you put in some input field and some admin interface somewhere.
Drew: Yeah, I think that’s the way that Netlify manage it.
Chris: They all do, you know?
Drew: Yeah. You have the secret environment variables that you can set from the web interface. That seems to work quite nicely.
Chris: Yeah, right. But then you got to leave… I don’t know, it’s not that big of a deal. I’m not saying they’re doing anything nefarious or anything. How do you deal with those secrets? Well, it’s a hard problem. So they kind of booted it to, I don’t know, “Just put them in this input field and we’ll take care of it for you, don’t worry about it.”
Drew: Is there anything that you’ve seen that stands out as an obvious case for things that you can do with serverless, that you just couldn’t do with a traditional kind of serverfull approach? Or is it just taking that code and sort of almost deploying it in a different way?
Chris: It’s probably mostly that. I don’t know that it unlocks any possibility that you just absolutely couldn’t run any other way. Yeah, I think that’s a fair answer, but it does kind of commoditize it in an interesting way. Like, if somebody writes a really nice serverless function… I don’t know that this exists quite yet, but there could kind of be a marketplace, almost, for these functions. Like, I want a really good serverless function that can take a screenshot. That could be an open source project with lots of eyeballs around it, that does a tremendously good job of doing it and solves all these weird edge cases. That’s the one I want to use. I think that’s kind of cool. You know? That you can kind of benefit from other people’s experience in that way. I think that will happen more and more.
Drew: I guess it’s the benefit that we talked about, right at the top, of enabling people who write JavaScript and may have written JavaScript only for the front-end, to expand and use those skills on the back-end as well.
Chris: Yeah, yeah. I think so, I think that’s… because there’s moments like… you don’t have to be tremendously skilled to know what’s appropriate and what’s not for a website. Like, I did a little tutorial the other week, where there was this… Glitch uses these… when you save a Glitch project, they give you a slug for your thing that you built, that’s, “Whiskey, tango, foxtrot. 1,000.” It’s like a clever little thing. The chances of it being unique are super high, because I think they even append a number to it or something too. But they end up being these fun little things. They open source their library that has all those words in it, but it’s like hundreds of thousands of words. The file is huge. You know? It’s megabytes large of just a dictionary of words. You probably learn in your first year of development, “Don’t ship a JavaScript file that’s megabytes of a dictionary.” That’s not a good thing to ship. You know? But Node doesn’t care. You can ship hundreds of them. It’s irrelevant to the speed on a server.
Drew: Yeah.
Chris: It doesn’t matter on a server. So, I could be like, “Hmm, well, I’ll just do it in Node then.” I’ll have a statement that says, “Words equal require words,” or whatever, and a note at the top, “Have it randomize a number. Pull it out of the array and return it.” So that serverless function is eight lines of code with a package.json that pulls in this open source library. And then my front-end code, there’s a URL to the serverless function. It hits that URL. The URL returns one word or a group of words or whatever. You build your own little API for it. And now, I have a really kind of nice, efficient thing. What was nice about that is, it’s so simple. I’m not worried about the security of it. I don’t… you know?
Chris: It’s just… a very average or beginner JavaScript developer, I think, can pull that off, which is cool. That’s an enabling thing that they didn’t have before. Before, they were like, “Well, here’s a 2MB array of words.” “Oh, I can’t ship that to the client.” “Oh, you’ll just shut down then.” You might hit this wall that’s like, “I just can’t do that part then. I need to ask somebody else to help me with that or just not do it or pick more boring slugs or some…” It’s just, you have to go some other way that is a wall to you, because you couldn’t do it. And now, you’re, “Oh, well, I’ll just…” Instead of having that in my script slash, or in my source slash scripts folder, I’ll put it in my functions folder instead.
Chris: You kind of like moved the script from one folder to the other. And that one happens to get deployed as a serverless function instead. How cool is that? You know? You’re using the same exact skill set, almost. There’s still some rough edges to it, but it’s pretty close.
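The whole slug function Chris describes really can be about that small. In this sketch, words.json stands in for whichever large open source word list you pull in; the response shape is made up.

```js
// The "random slug" function, more or less in full.
const words = require("./words.json"); // hypothetical: a big array of friendly words

exports.handler = async () => {
  const pick = () => words[Math.floor(Math.random() * words.length)];
  // e.g. "whiskey-tango-foxtrot-1000"
  const slug = `${pick()}-${pick()}-${pick()}-${Math.floor(Math.random() * 10000)}`;
  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ slug }),
  };
};
```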
Drew: It’s super cool. You’ve put together a sort of little micro site all about these ideas, haven’t you?
Chris: Yeah. I was a little early to the game. I was just working on it today, though, because… it gets pull requests. The idea… well, it’s at serverless.css-tricks.com and… there’s a dash in CSS-Tricks, by the way. So it’s a subdomain of CSS-Tricks, and I built it serverlessly too, so this is… CSS-Tricks is like a WordPress site, but this is a static site generator site. All the content of it is in the GitHub repo, which is open-source. So if you want to change the content of the site, you can just submit a pull request, which is nice because there’s been a hundred or so of those over time. But I built all the original content.
Drew: It’s a super useful place, because it lists… If you’re thinking, “Right, I want to get started with serverless functions,” it lists all the providers who you could try it and…
Chris: That’s all it is, pretty much, is lists of technology. Yeah.
Drew: Which is great, because otherwise, you’re just Googling for whatever and you don’t know what you’re finding. Yeah, it’s lists of API providers that help you do these sorts of things.
Chris: Forms is one example of that, because… so the minute that you choose to… let’s say, you’re going to go JAMstack, which I know that’s not necessarily the point of this, but you see how hand in hand they are. All of a sudden, you don’t have a PHP file or whatever to process that form with. How do you do forms on a JAMstack site? Well, there’s any number of ways to do it. Everybody and their sister wants to help you solve that problem, apparently. So the folks behind the word JAMstack naturally try to help you with that, but you don’t have to use them.
Chris: In fact, I was so surprised putting this site together. Let’s see. There’s six, nine, twelve, fifteen, eighteen, twenty one, twenty two services out there, that want to help you serverlessly process your forms on this site right now. If you want to be the 23rd, you’re welcome to it, but you have some competition out there. So the idea behind this is that you write a form in HTML, like literally a form element. And then the action attribute of the form, it can’t point anywhere internally, because there’s nothing to point to. You can’t process, so it points externally. It points to whatever they want you to point it to. They’ll process the form and then they tend to do things that you’d expect them to, like send an email notification. Or send a Slack thing. Or then send it to Zapier and Zapier will send it somewhere else. They all have slightly different feature sets and pricing and things, but they’re all trying to solve that problem for you, like, “You don’t want to process your own forms? No problem. We’ll process it for you.”
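The form side of that arrangement is just ordinary HTML whose action points at someone else’s endpoint. The URL below is a placeholder, not a real service.

```html
<!-- The whole trick: a plain form whose action points at a third-party
     form-processing service. The URL here is a placeholder. -->
<form action="https://forms.example.com/f/abc123" method="POST">
  <label for="email">Email</label>
  <input type="email" id="email" name="email" required>

  <label for="message">Message</label>
  <textarea id="message" name="message"></textarea>

  <button type="submit">Send</button>
</form>
```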
Drew: Yeah, it’s a super useful resource. I’d really recommend everyone check it out. It’s serverless.css-tricks.com. So, I’ve been learning all about serverless. What have you been learning about lately, Chris?
Chris: Well, I’m still very much in this world too and learning about serverless stuff. I had an idea to… I used to play this online role playing game ages ago. I just recently discovered that it’s still alive. It’s a text based medieval fantasy kind of game. I played it when AOL was a thing, because AOL wanted to have these games that you had to be logged on to play it, because they wanted you to spend hours and hours on AOL, so they could send you these huge bills, which was, I’m sure, why they did so well at some point.
Drew: So billing by the second. Yeah.
Chris: Yeah. So games was big for them. If they could get you playing games with other people on there. So this game kind of… it didn’t debut there, but it moved to AOL, because I’m sure they got a juicy deal for it, but it was so… I mean, it’s just, couldn’t possibly be nerdier. You’re a dwarven mage and you get rune staff from your leather sheath. And you type commands into it like a terminal. Then the game responds to you. I played that game for a very long time. I was very into it. I got into the community of it and the spirit of it. It was kind of a… it was like I was just alone by myself at my computer, but yet I look back on that time in my life, and be like, “That was a wonderful time in my life.” I was really… I just liked the people and the game and all that. But then I grew up and stopped playing it, because life happens to you.
Chris: I only found out recently, because somebody started doing a podcast about it again… I don’t know how I came across it, but I just did. I was like, “This game is alive and well in today’s world, are you kidding me? This text based thing.” And I was more than happy to reactivate and get my old characters back and play it. But only to find out that the clients that they have you download for this game, haven’t evolved at all. They are awful. They almost assume that you’re using Windows. There’s just these terribly cheesy poorly rendering… and it’s text based, you think it’d at least have nice typography. No. So I’m like, “I could be involved. I could write a client for this game. Put beautiful typography in it.” Just modernize the thing, and I think the players of the game would appreciate it, but it felt overwhelming to me. “How can I do it?” But I find some open source projects. One of them is like… you can play the game through an actual terminal window, and it uses some open source libs to kind of make a GUI out of a terminal window.
Drew: Really?
Chris: I don’t know. So that was kind of cool. I was like, “If they wrote that, there must be code in there to how to connect to the game and get it all going and stuff. So at least I have some starter code.” I was trying to go along the app, “Maybe I’ll do it in Flutter or something,” so the final product app would work on mobile phones and, “I could really modernize this thing.” But then I got overwhelmed. I was like, “Ah, this is too big a… I can’t. I’m busy.” But I found another person who had the same idea and they were way further along with it, so I could just contribute on a design level. And it’s been really fun to work on, but I’ve been learning a lot too, because it’s rare for me to jump into a project that’s somebody else’s baby, and I’m just contributing to a little bit, and that has totally different technology choices than I would have ever picked.
Chris: It’s an Electron app. They picked that, which is also kind of a cool way to go too, because it’s my web skills… so I’m not learning anything too weird, and it’s cross-platform, which is great. So, I’ve been learning a lot about Electron. I think it’s fun.
Drew: That’s fascinating. It’s always amazing how little side projects and things that we do for fun, end up being the place where we sometimes learn the most. And learn skills that can then feed back into our sort of daily work.
Chris: That’s the only way I learn things. I’m dragged into something that… I was like, “They’re not…” It’s rendered with a JavaScript library called Mithril, which is… I don’t know if you’ve ever heard of it, but it’s weird. It’s not… it’s almost like writing React without JSX. You have to “create element” and do all these… but it’s supposed to benchmark way better than it… And it actually kind of matters because in this text based game, the text is just flying. There’s a lot of data manipulation, which is like… you’d think this text based game would be so easy for a browser window to run, but it’s actually kind of not. There’s so much data manipulation happening, that you really have to be really… we have to be conscientious about the speed of the rendering. You know?
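For a flavor of what “React without JSX” looks like in Mithril, here is a tiny sketch using its hyperscript m() calls. The component and the log data are invented; it is not code from the actual game client.

```js
// A taste of Mithril's hyperscript style: no JSX, just m() calls.
import m from "mithril";

const GameLog = {
  lines: [],
  view() {
    return m("div.log",
      GameLog.lines.map((line) => m("p.line", line))
    );
  },
};

m.mount(document.getElementById("app"), GameLog);

// Appending a line and asking Mithril to redraw:
GameLog.lines.push("You get your rune staff from your leather sheath.");
m.redraw();
```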
Drew: That’s fascinating-
Chris: Pretty cool.
Drew: Yeah. If you, dear listener, would like to hear more from Chris, you can find him on Twitter, where he’s @chriscoyier. Of course, CSS-Tricks can be found at css-tricks.com and CodePen at codepen.io. But most of all, I recommend that you subscribe to the ShopTalk Show podcast if you haven’t already done so, at shoptalkshow.com. Thanks for joining us today, Chris. Do you have any parting words?
Chris: Smashingpodcast.com. I hope that’s the real URL.
Smashing Podcast Episode 22 With Chris Coyier: What Is Serverless?
We’re talking about Serverless architectures. What does that mean, and how does it differ from how we might build sites currently? Drew McLellan talks to Chris Coyier to find out.
Today, we’re talking about Serverless architectures. What does that mean, and how does it differ from how we might build sites currently? I spoke to Chris Coyier to find out.
Show Notes
Weekly Update
Transcript
Drew McLellan: He’s a web designer and developer who you may know from CSS-Tricks, a website he started more than 10 years ago and that remains a fantastic learning resource for those building websites. He’s the co-founder of CodePen, the browser based coding playground and community used by front-enders all around the world to share what they make and find inspiration from those they follow. Alongside Dave Rupert is the co-host of ShopTalk Show, a podcast all about making websites. So we know he knows a lot about web development, but did you know he once won a hot dog eating competition using only his charm? My smashing friends, please welcome Chris Coyier. Hello Chris, how are you?
Chris Coyier: Hey, I’m smashing.
Drew: I wanted to talk to you today not about CodePen, and I don’t necessarily want to talk to you about CSS-Tricks, which is one of those amazing resources that I’m sure everyone knows appears right at the top of Google Search results when looking for answers about any web dev question. Up pops your face and there’s a useful blog post written by you or one of your guest contributors.
Chris: Oh, I used to actually do that. There was a… I don’t know, it probably was during the time of when Google had that weird social network. What was that? Google Plus?
Drew: Oh, Plus, yeah.
Chris: Yeah, where they would associate a website with a Plus account, and so my Plus account had an avatar, and the avatar was me, so it would show up in search results. I think those days are gone. I think if you…
Drew: I think so, yeah-
Chris: Yeah.
Drew: But I kind of wanted to talk to you about something that has been a little bit more of a sort of side interest of yours, and that’s this concept of serverless architectures.
Chris: Mm (affirmative).
Drew: This is something you’ve been learning sort of more about for a little while. Is that right?
Chris: Yeah, yeah. I’m just a fan. It seems like a natural fit to the evolution of front-end development, which is where I feel like I have, at least, some expertise. I consider myself much more of a… much more useful on the front-end than the back-end, not that I… I do it all these days. I’ve been around long enough that I’m not afraid of looking at a little Ruby code, that’s for sure. But I prefer the front-end. I’ve studied it more. I’ve participated in projects more at that level, and then along comes this little kind of a new paradigm that says, “You can use your JavaScript skills on the server,” and it’s interesting. You know? That’s how I think of it. There’s a lot more to it than that, but that’s why I care, is because I feel it’s like front-end developers have dug so deep into JavaScript. And now we can use that same skill set elsewhere. Mm, pretty cool.
Drew: Seems like a whole new world has opened up, whereas if you were just a front-end coder… I say, just a front-end coder, I shouldn’t. If you’re a front-end coder, and you’re used to working with a colleague or a friend to help you with the back-end implementation, suddenly that’s opened up. And it’s something that you can manage more of the whole stack yourself.
Chris: Yeah, yeah. That’s it.
Drew: Addressing the elephant in the room, right at the top. We’re talking about serverless, and obviously, naming things is hard. We all know that. Serverless architecture doesn’t mean there are no servers, does it?
Chris: I think it’s mandatory, like if this is the first podcast you’re hearing of it, or in the first… you’re only hearing the word “serverless” in the first dozen times you ever heard it, it’s mandatory that you have a visceral reaction and have this kind of, “Oh, but there are still servers.” That’s okay. If that’s happening to you right now, just know that, that’s a required step in this. It’s just like anything else in life. There’s stages to understanding. The first time you hear something, you’re required to kind of reject it a little bit, and then only after a dozen times or so, or after it’s proven its worth a little bit to you, do you get to enter the further stages of understanding here. But the word has won, so if you’re still fighting against the word “serverless”, I hate to tell you, that the train has left the station there. The word is already successful. You’re not going to win this one. So, sorry.
Chris: But I do think it’s interesting that… it’s starting to be like, maybe there actually aren’t servers involved sometimes. I would think one of the things that locked serverless in as a concept was AWS Lambda. They were kind of the first on the scene. A lambda is like a function that you give to AWS and it puts it in the magical sky and then… it has a URL, and you can hit it and it will run that function and return something if you want it to. You know? That’s just HTTP or whatever. That’s how it works, which… the first time you hear that, you’re like, “Why? I don’t care.” But then, there’s some obvious things to it. It could know my API keys that nobody else has access to. That’s why you run back-end to begin with, is that it knows secret stuff that doesn’t have to be in the JavaScript on the client side. So if it needs to talk to a database, it can do that. It can do that securely without having to expose API keys elsewhere. Or even where that data is or how it gets it, it’s…
Chris: So that’s pretty cool. I can write a function that talks to a database, get some data, returns that. Cool. So, Lambda is that, but AWS works. You have to pick a region. You’re like, “I don’t know. Where it should be, Virginia? Oregon? Should I pick the Australia one? I don’t know.” They have 20, 30. I don’t even know how many they have these days, but even lambdas had regions. They, I think, these days have Lambda@Edge, which means it’s all of the regions, which is kind of cool. But they were first, and now everybody’s got something like Lambda. All the cloud services. They want some kind of service in this world. One of them is CloudFlare. CloudFlare has workers. They have way more locations than AWS has, but they executed it kind of at a different time too… the way a CloudFlare worker… it’s similar to a lambda in that you can run Node. You can run JavaScript. You can run a number of other languages too, but… I think of this stuff largely, the most interesting language is JavaScript, just because of the prevalence of it.
Chris: It happens just at the CDN level, which I guess is a server, but I tend to not think of CDNs as a server. Not as obviously as something else. It’s starting to feel even more serverless-y lately. Is a CDN a server? I mean, I guess it’s a computer somewhere, but it feels like even less server-y.
Drew: It feels like, yes, a CDN may be a server, but it’s the most sort of minimal version of a server. It’s like a thin server, if you like.
Chris: Yeah. Sure.
Drew: All right. I’ve heard it said… I can’t remember the source to credit, unfortunately, but I’ve heard serverless described as being “like using a ride-sharing service like Uber or Lyft” or whatever. You can be carless and not own a car, but that doesn’t mean you never use a car.
Chris: Yeah, it doesn’t mean cars don’t exist. Mm, that’s nice.
Drew: You just summon one when you need it, but at the same time, you’re not paying the upfront purchase cost of a car. You’re not paying maintenance or fuel or-
Chris: Right, and the pricing makes sense, too, right? That’s nice. That’s a nice analogy, I think. And then, because it’s at the CDN level too, it just intercepts HTTP requests that are already happening, which means you don’t ask it… you don’t send a request to it and it sends a request back. It’s just happening during the request naturally, which also makes it feel less server-y. I don’t know, it’s interesting. It’s interesting for sure. So that’s a big deal, though, that you brought up the pricing thing. That you only pay for what you use. That’s significant too, because… let’s say, you’re a back-end dev, who’s used to spinning up servers their whole life. And they run the costs, “I need this kind of server with this kind of memory and this kind of CPU and these kind of specs. And this is how much it’s going to cost.” Serverless comes along and chops the head off of that pricing.
Chris: So, even if you’re a back-end dev who just doesn’t like this that much, that they’re just not into it, like your skill set is just what it is over the years, you compare the price and you’re like, “What? I could be paying 1% of what I was paying before?” You are not allowed to not care about that, right? If you’re this back-end dev that’s paying a hundred times more for their service than they need to be paying, you’re just kind of bad at your job then. Sorry to say. This has come along and this has shattered pricing in a lot of ways. You have to care about that. And it’s kind of cool that somebody else is… It’s not like you don’t have to worry about security at all, but it’s not your server. You don’t have… your lambda or cloud function, or your worker, or whatever, isn’t sitting on a server that’s right next to some really sensitive data on your own network. It’s not right next to your database.
Chris: If somebody writes code that somehow tries to eject itself from the worker or the lambda, or whatever, and try to get access to other things in their way, there’s nothing there to get. So the security’s a big deal too, so again, if that’s your job as the server admin, is to deal with the security of this thing. Running it, running certain things in Lambda, you just get some natural security from it, which is great. So, it’s way cheaper. It’s way more secure. It encourages these small modular architecture, which can be a good idea. It seems to be domino after domino of good ideas here. That’s why it’s notable. You know?
Drew: Yeah, I mean, traditionally with a server based architecture that we’ve been running for decades on the web, you have a web server that you run yourself. It holds your front-end code, your back-end code, your database and everything. Then you need to maintain that and keep it running and pay the bills, and even if it’s not being used, it’s there clocking up bills. The user would make a request and it would build all that HTML query stuff from the database, send it all down the line to the browser. That process works. It’s how loads of things are built. It’s probably the majority of how the web is built. It’s how things like WordPress work. Is this really a problem that we need to solve? I mean, we’ve talked about costs a little bit. What are the other sort of problems with that, that we’re… that we need to address, and that serverless might help us with?
Chris: Yeah, the problems with the old school approach. Yeah, I don’t know, maybe there isn’t any. I mean, I’m not saying the whole web needs to change their whole… the whole thing overnight. I don’t know. Maybe it doesn’t really, but I think it opens up doors. It just seems like, when good ideas arrive like this, they just slowly change how the web operates at all. So, if there’s some CMS that is built in some way that expects a database to be there, it means that maybe the hosts of the future will start leveraging this in interesting ways. Maybe it feels to you like it’s still just a traditional server, but the hosts themselves have farmed it out, how they operate, to serverless architectures. So you don’t even really know that that’s happening, but they’ve found a way to slash their costs by hosting the stuff that you need in serverless ways. Maybe yeah don’t even need to care as a developer, but at a meta level, that’s what’s happening. Maybe. I don’t know.
Chris: It also doesn’t mean that… Databases are still there. If it turns out that architecturally having a relational database is the correct way to store that data, great. I mention that because this world of Serverless is kind of growing up at the same time that JAMstack is. And JAMstack is this architecture that’s, “You should be serving your website off of static hosts, that run nothing at all except for…” They’re like little CDNs. They’re like, “I can do nothing. I don’t run PHP. I don’t run Ruby. I run nothing. I run on a tiny little web server that’s just designed to serve static files only.”
Chris: “And then, if you need to do more than that, if you need to pull data from a relational database, then please do it at some other time, not at the server time. You can either do it in a build process ahead of time, and pull that stuff out of the database, pre-build static files and I’ll serve those, or do it at runtime.” Meaning you get this shell of a document, and then it makes a JavaScript request to get some data and prefills it then. So you do it ahead of time or after time, but it doesn’t mean, “Don’t use a relational database.” It just means, “Don’t have the server generate it at the time of the request of the document,” which is a… I don’t know, it’s a little bit of a paradigm shift.
Chris: It’s not just JAMstack either. We’re also living in the time of JavaScript frameworks. We’re living in a time where it’s starting to be a little more expected that the way that a JavaScript application boots up, is that it mounts some components, and as those components mount, it asks for the data that it needs. And so, it can be kind of a natural fit for something like a React website to be like, “Well, I’ll just hit a serverless function to cough up the data that it needs. It hits some JSON API essentially. I get the JSON data that I need and I construct myself out of that data, and then I render onto the page.” Now, whether that’s good or bad for the web, it’s like, “I don’t know. Too bad. Ship has sailed. That’s how a lot of people are building sites.” It’s just client rendered things. So, serverless and modern JavaScript kind of go hand in hand.
Drew: I suppose you don’t have to wholesale… be looking at one architecture or another. There’s an area in the middle where parts of an infrastructure might be more traditional and parts could be serverless, I’m guessing?
Chris: Yeah. Well, they’re trying to tell you that anyway. Anybody that wants to sell you any part of their architecture is like, “You don’t have to buy in all right now. Just do it a little bit.” Because of course, they want you to dip your toe into whatever they’re selling, because once you dip the toe, the chances that you splash yourself into the pool is a lot higher. So, I think that… it’s not a lie, though, necessarily, although I find a little less luck in… I don’t want my stack to be a little bit of everything. I think there’s some technical death there that I don’t always want to swallow.
Drew: Mm (affirmative).
Chris: But it’s possible to do. I think the most quoted one is… let’s say I have a site that has an eCommerce element to it, which means… and let’s say large scale eCommerce, so 10,000 products or something, that this JAMstack architecture hasn’t gotten to the point where that’s always particularly efficient to rebuild that statically. So, the thinking goes, “Then don’t.” Let that part kind of hydrate naturally with… hit serverless functions and get the data that it needs, and do all that. But the rest of the site, which isn’t… there’s not as many pages, there’s not as much data, you could kind of pre-render or whatever. So a little bit of both.
Drew: Of course, plenty of people are dealing with legacy systems that… some old database thing that was built in the 2000s that they may be able to stick a sort of JSON API layer on top of…
Chris: Yeah.
Drew: … and build something more modern, and perhaps serverless, and then still interact with those legacy systems by sort of gluing it altogether in a weird way.
Chris: Yeah. I like that though, isn’t it? Aren’t… most websites already exist. How many of us are totally green-fielding websites? Most of us work on some crap that already exists that needs to be dragged into the future for some reason, because I don’t know, developers want to work faster, or you can’t hire anybody in COBOL anymore, or whatever the story is. You know?
Drew: So terminology-wise, we’re talking about JAMstack which is this methodology of running code pretty much in the browser, serving it from a CDN. So, not having anything dynamic on the server. And then when we talk about serverless, we’re talking about those small bits of functionality that run on a server somewhere else. Is that right? That we were talking about these cloud function kind of-
Chris: Yeah, I mean, they just happen to be both kind of hot ideas right now. So it’s kind of easy to talk about one and talk about the other. But they don’t necessarily need to be together. You could run a JAMstack site that has nothing to do with serverless anything. You’re just doing it, you just pre-build the site and run it, and you can use serverless without having to care about JAMstack. In fact, CodePen does nothing JAMstack at all. Not that we want to talk about CodePen necessarily, but it’s a Ruby on Rails app. It runs on a whole bunch of AWS EC2 instances and a variety of other architecture to make it happen. But we use serverless stuff whenever we can for whatever we can, because it’s cheap and secure, and just a nice way to work. So, no JAMstack in use at all but serverless all over the place.
Drew: That’s quite interesting. What sort of tasks are you putting serverless to on CodePen?
Chris: Well, there’s a whole bunch of things. One of them is, I think, hopefully fairly obvious is, I need… the point of CodePen is that you write HTML, CSS and JavaScript in the browser and it renders it in front of you, right? But you can pick pre-processor languages as well. Let’s say you like Sass. You turn Sass on in the CSS, and you write Sass. Well, something has to process the Sass. These days, Sass is written in Dart or something.
Chris: Theoretically, you could do that in the client. But these libraries that do pre-processing are pretty big. I don’t think I want to ship the entire Sass library to you, just to run that thing. I don’t want to… it’s just not, that’s not the right architecture for this necessarily. Maybe it is down the road, I mean, we could talk about offline crap, yada, yada, Web Workers. There’s a million architectural things we could do. But here’s how it does work now, is there’s a lambda. It processes Sass. It has one tiny, tiny, tiny, little job.
Chris: You send it this blob of Sass and it sends you stuff back, which is the processed CSS, maybe a source map, whatever. It has one tiny little job and we probably pay for that lambda, like four cents or something. Because lambdas are just incredibly cheap and you can hammer it too. You don’t have to worry about scale. You just hit that thing as much as you want and your bill will be astonishingly cheap. There are moments where serverless starts to cross that line of being too expensive. I don’t know what that is, I’m not that master of stuff like that. But generally, any serverless stuff we do, we basically… all nearly count as free, because it’s that cheap. But there’s one for Sass. There’s one for Less. There’s one for Babel. There’s one for TypeScript. There’s one for… All those are individual lambdas that we run. Here’s some code, give it to the lambda, it comes back, and we do whatever we’re going to do with it. But we use it for a lot more than that, even recently.
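A stripped-down sketch of what one of those single-job lambdas could look like, assuming the npm `sass` package and the standard AWS-style handler shape; this is not CodePen’s actual code:

```js
// Hypothetical Sass-compiling lambda: one tiny job, in and out.
// Assumes the "sass" npm package is bundled with the function.
const sass = require("sass");

exports.handler = async (event) => {
  const { source } = JSON.parse(event.body || "{}");

  try {
    // compileString is the string API in recent releases of Dart Sass.
    const result = sass.compileString(source, { sourceMap: true });
    return {
      statusCode: 200,
      body: JSON.stringify({ css: result.css, sourceMap: result.sourceMap }),
    };
  } catch (err) {
    // Hand compile errors back so the editor can display them.
    return { statusCode: 400, body: JSON.stringify({ error: err.message }) };
  }
};
```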
Chris: Here’s an example. Every single Pen on CodePen has a screenshot. That’s kind of cool, right? So, the people make a thing and then we need a PNG or a JPEG, or something of it, so that we can… that way when you tweet it, you get the little preview of it. If you share it in Slack, you get the little preview of it. We use it on the website itself to render… instead of an iframe, if we could detect that the Pen isn’t animated, because an image is much lighter than an iframe, so why not use the image? It’s not animated anyway. Just performance gains like that. So each of those screenshots has a URL to it, obviously. And we’ve architected it so that that URL is actually a serverless function. It’s a worker. And so, if that URL gets hit, we can really quickly check if we’ve already taken that screenshot or not.
Chris: That’s actually enabled by CloudFlare Workers, because CloudFlare Workers are not just a serverless function, but they have a data store too. They have this thing called key-value store, so the ID of that, we can just check really quick and it’ll be, “True or false, do you have it or not?” If it’s got it, it serves it. And it serves it over CloudFlare, which is super fast to begin with. And then gives you all this ability too. Because it’s an image CDN, you can say, “Well, serve it in the optimal format. Serve it as these dimensions.” I don’t have to make the image in those dimensions. You just put the dimensions in the URL and it comes back as that size, magically. So that’s really nice. If it doesn’t have it, it asks another serverless function to make it really quick. So it’ll make it and then it’ll put it in a bucket somewhere… because you have to have an origin for the image, right? You have to actually host it somewhere usually. So we put it in an S3 bucket real quick and then serve it.
Chris: So there’s no queuing server, there’s no nothing. It’s like serverless functions manage the creation, storage and serving of these images. And there’s like 50 million or 80 million of them or something. It’s a lot, so it handles that at scale pretty nicely. We just don’t even touch it. It just happens. It all happens super fast. Super nice.
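A condensed sketch of that screenshot worker, assuming a KV namespace bound as `SCREENSHOTS` and a made-up generator URL. For brevity this version stores the image bytes in KV, whereas the setup Chris describes keeps the image in an S3 bucket and uses KV just to record whether it exists:

```js
// Hypothetical Cloudflare Worker: serve a screenshot if we already have it,
// otherwise ask another function to generate one, store it, and serve that.
export default {
  async fetch(request, env) {
    const id = new URL(request.url).pathname.slice(1); // e.g. "abc123.png"

    // Quick KV check: have we already taken this screenshot?
    const cached = await env.SCREENSHOTS.get(id, { type: "arrayBuffer" });
    if (cached) {
      return new Response(cached, { headers: { "Content-Type": "image/png" } });
    }

    // Not there yet: ask a (made-up) generator function to produce it.
    const generated = await fetch(`https://screenshots.example.com/make?id=${id}`);
    const bytes = await generated.arrayBuffer();

    await env.SCREENSHOTS.put(id, bytes); // remember it for next time
    return new Response(bytes, { headers: { "Content-Type": "image/png" } });
  },
};
```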
Drew: I guess it… well, a serverless function is ideally going to suit a task that needs very little knowledge of state of things. I mean, you mentioned CloudFlare’s ability to store key-value pairs to see if you’ve got something cached already or not.
Chris: Yeah. That’s what they’re trying to solve, though, with those. Those key-value pairs, is that… I think that traditionally was true. They’re like, “Avoid state in the thing,” because you just can’t count on it. And CloudFlare Workers are being like, “Yeah, actually, you can deal with state, to some degree.” It’s not as fancy as a… I don’t know, it’s key values, so it’s a key and a value. It’s not like a nested, relational fancy thing. So there’s probably some limits to that. But this is baby days for this. I think that stuff’s going to evolve to be more powerful, so you do have some ability to do some state-like stuff.
Drew: And sometimes the limitation, that sort of limited ability to maintain state, or the fact that you have no… you want to maintain no state at all, kind of pushes you into an architecture that gives you this sort of… Well, when we talk about the software philosophy of “Small Pieces Loosely Joined”, don’t we?
Chris: Mm (affirmative).
Drew: Where each little component does one thing and does it well. And doesn’t really know about the rest of the ecosystem around it. And it seems that really applies to this concept of serverless functions. Do you agree?
Chris: Yeah. I think you could have a philosophical debate whether that’s a good idea or not. You know? I think some people like the monolith, as it were. I think there’s possible… there’s ways to overdo this and to make too many small parts that are too hard to test altogether. It’s nice to have a test that’s like, “Oh, I wonder if my Sass function is working. Well, let’s just write a little test for it and make sure that it is.” But let’s say, what matters to the user is some string of seven of those. How do you test all seven of them together? I think that story gets a little more complicated. I don’t know how to speak super intelligently to all that stuff, but I know that it’s not necessarily that, if you roll with all serverless functions that’s automatically a better architecture than any other architecture. I like it. It reasons out to me nicely, but I don’t know that it’s the end-all be-all of all architectures. You know?
Drew: To me, it feels extremely web-like, in that… this is exactly how HTML works, isn’t it? You deliver some HTML and the browser will then go and fetch your images and fetch your JavaScript and fetch your CSS. It seems like it’s an expansion of that –
Chris: It’s nice.
Drew: … sort of idea. But, one thing we know about the web, is it’s designed to be resilient because network’s fragile.
Chris: Mm (affirmative).
Drew: How robust is the sort of serverless approach? What happens if something… if one of those small pieces goes away?
Chris: That would be very bad. You know? It would be a disaster. Your site would go down just like any other server, if it happens to go down, I guess.
Drew: Are there ways to mitigate that, that are particularly –
Chris: I don’t know.
Drew: … suited to this sort of approach, that you’ve come across?
Chris: Maybe. I mean, like I said, a really super fancy robust thing might be like… let’s say you visit CodePen and let’s say that there’s a JavaScript implementation of Sass and we noticed that you’re on a fairly fast network and that you’re idle right now. Maybe we’ll go grab that JavaScript and we’ll throw it in a service worker. Then, if we detect that the lambda fails, or something, or that you have this thing installed already, then we’ll hit the service worker instead of the lambda, and service workers are able to work offline. So, that’s kind of nice too. That’s interesting. I mean, they are the same language-ish. Service workers are JavaScript and a lot of Cloud functions are JavaScript, so there’s some… I think that’s a possibility, although that… it’s just, that’s some serious technical stuff that… It just scares me to have this chunk of JavaScript that you’ve delivered to however many thousands of users, that you don’t necessarily know what they have, and what version of it they have. Eww, but that’s just my own scarediness. I’m sure some people have done a good job with that type of thing.
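A loose sketch of that fallback idea, assuming a hypothetical lambda path and a route that a registered service worker has been set up to intercept with a locally cached compiler:

```js
// Hypothetical client-side fallback: prefer the lambda, but if it fails,
// hit a route that a service worker is assumed to answer offline.
async function compileSass(source) {
  try {
    const res = await fetch("/api/sass", { method: "POST", body: source });
    if (!res.ok) throw new Error(`Lambda responded with ${res.status}`);
    return await res.text();
  } catch (err) {
    // The service worker intercepts this path and runs a locally
    // cached JavaScript Sass compiler instead.
    const local = await fetch("/local/sass", { method: "POST", body: source });
    return local.text();
  }
}
```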
Chris: I actually don’t know. Maybe you know some strategies that I don’t, on resiliency of serverless.
Drew: I guess there’s a failure mode, a style of failure, that could happen with serverless functions, where you run a function once and it fails, and you can run it a second time immediately afterwards and it would succeed, because it might hit a completely different server. Or whatever the problem was with that run may not exist on a second request. The issue of an entire host being down is one thing, but maybe there are… you have individual problems with the machine. You have a particular server where its memory has gone bad, and it’s throwing a load of errors, and the first time you hit it, it’s going to fail. Second time, that problem might have been routed around.
Chris: Companies that tend to offer this technology, you have to trust them, but they also happen to be the type of companies that… this is their pride. This is the reason why people use them is because they’re reliable. I’m sure people could point to some AWS outages of the past, but they tend to be a little rare, and not super common. If you were hosting your own crap, I bet they got you beat from an SLA percentage kind of level. You know? So it’s not like, “Don’t build in a resilient way,” but generally the type of companies that offer these things are pretty damn reliable. The chances of you going down because you screwed up that function are a lot higher than because their architecture is failing.
Drew: I suppose, I mean, just like anything where you’re using an API or something that can fail, is just making sure you structure your code to cope with that failure mode, and to know what happens next, rather than just throwing up an error to the user, or just dying, or what have you. It’s being aware of that and asking the user to try again. Or trying again yourself, or something.
Chris: Yeah, I like that idea of trying more than once, rather than just being, “Oh no. Fail. Abort.” “I don’t know, why don’t you try again there, buddy?”
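That retry idea is easy to sketch generically, with nothing provider-specific about it:

```js
// Retry a request a few times with exponential backoff before giving up.
async function fetchWithRetry(url, options = {}, attempts = 3) {
  for (let i = 0; i < attempts; i++) {
    try {
      const res = await fetch(url, options);
      // Retry server errors; hand anything else straight back to the caller.
      if (res.status < 500) return res;
    } catch (err) {
      // Network failure: fall through and try again.
    }
    await new Promise((resolve) => setTimeout(resolve, 250 * 2 ** i));
  }
  throw new Error(`Request to ${url} failed after ${attempts} attempts`);
}
```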
Drew: So I mean, when it comes to testing and development of serverless functions, sort of cloud functions, is that something that can be done locally? Does it have to be done in the cloud? Are there ways to manage that?
Chris: I think there are some ways. I don’t know if the story is as awesome. It’s still a relatively new concept, so I think that that gets better and better. But from what I know, for one thing, you’re writing a fairly normal Node function. Assuming you’re using JavaScript to do this, and I know that on Lambda specifically, they support all kinds of stuff. You can write a fricking PHP Cloud Function. You can write a Ruby Cloud Function. So, I know I’m specifically talking about JavaScript, because I have a feeling that most of these things are JavaScript. Even no matter what language it is, I mean, you can go to your command line locally and execute the thing. Some of that testing is… you just test it like you would any other code. You just call the function locally and see if it works.
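The “call the function locally” style of test might look like this, assuming the AWS-style handler signature and the hypothetical Sass function sketched earlier:

```js
// Hypothetical local test: require the handler and call it with a fake
// event object, no HTTP and no cloud provider involved.
const { handler } = require("./functions/sass");

(async () => {
  const fakeEvent = {
    body: JSON.stringify({ source: "$c: red; a { color: $c; }" }),
  };
  const response = await handler(fakeEvent);

  console.assert(response.statusCode === 200, "expected a 200 response");
  console.log(JSON.parse(response.body).css);
})();
```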
Chris: It’s a little different story when you’re talking about an HTTP request to it, that’s the thing that you’re trying to test. Does it respond to the request properly? And does it return the stuff properly? I don’t know. The network might get involved there. So you might want to write tests at that level. That’s fine. I don’t know. What is the normal story there? You spin up some kind of local server or something that serves it. Use Postman, I don’t know. But there’s… Frameworks try to help too. I know that the serverless “.com”, which is just terribly confusing, but there’s literally a company called Serverless and they make a framework for writing the serverless functions that helps you deploy them.
Chris: So if you, like, npm install serverless, you get their framework. And it’s widely regarded as very good, because it’s just very helpful, but they don’t have their own cloud or whatever. You write these and then it helps you get them to a real lambda. Or it might work with multiple cloud providers. I don’t even know these days, but their purpose of existing is to make the deployment story easier. I don’t know what… AWS is not renowned for their simplicity. You know? There’s all this world of tooling to help you use AWS and they’re one of them.
Chris: They have some kind of paid product. I don’t even know what it is exactly. I think one of the things they do is… the purpose of using them is for testing, is to have a dev environment that’s for testing your serverless function.
Drew: Yeah, because I guess, that is quite a big part of the workflow, isn’t it? If you’ve written your JavaScript function, you’ve tested it locally, you know it’s going to do the job. How do you actually pick which provider it’s going to go into and how do you get it onto that service? Now, I mean, that’s a minefield, isn’t it?
Chris: Yeah. I mean, if you want to use no tooling at all, I think they have a really… like AWS, specifically, has a really rudimentary GUI for the thing. You can paste the code in there and hit save and be like, “Okay, I guess it’s live now.” That’s not the best dev story, but I think you could do it that way. I know CloudFlare workers have this thing called Wrangler that you install locally. You spin it up and it spins up a fake browser on the top and then dev tools below. Then you can visit the URL and it somehow intercepts that and runs your local cloud function against it. Because one of the interesting things about workers is… you know how I described how it… you don’t hit a URL and then it returns stuff. It just automatically runs when you… when it intercepts the URL, like CDN style.
Chris: So, one of the things it can do is manipulate the HTML on the way through. The worker, it has access to the complete HTML document. They have a jQuery-esque thing that’s like, “Look for this selector. Get the content from it. Replace it with this content. And then continue the request.” So you can mess with code on the way through it. To test that locally, you’re using their little Wrangler tool thing to do that. Also, I think the way we did it was… it’s also a little dangerous. The second you put it live, it’s affecting all your web traffic. It’s kind of a big deal. You don’t want to screw up a worker. You know? You can spin up a dev worker that’s at a fake subdomain, and because it’s CloudFlare, you can… CloudFlare can just make a subdomain anyway. I don’t know. It’s just kind of a nice way to do a… as you’re only affecting sub-domain traffic, not your main traffic yet. But the subdomain’s just a mirror of a production anyway, so that’s kind of a… that’s a testing story there.
Chris: It brings up an interesting thing, though, to me. It’s like… imagine you have two websites. One of them is… for us it’s like a Ruby on Rails app. Whatever. It’s a thing. But we don’t have a CMS for that. That’s just like… it’s not a CMS, really. I think there’s probably Ruby CMSs, but there’s not any renowned ones. You know? It seems like all the good CMSs are PHP, for some reason. So, you want a quality CMS. Drew, you’ve lived in the CMS market for a long time –
Drew: Absolutely.
Chris: … so you know how this goes. Let’s say you want to manage your sites in Perch or whatever, because it’s a good CMS and that’s the proper thing to use to build the kind of pages you want to build. But you don’t want to run them on the same server. You want to manage the pages on one site, but show them on another site. Well, I don’t know, there’s any number of ways to do that. But one JavaScript way could be, “Okay, load the page. There’s an empty div there. Run some JavaScript. Ask the other site for the content of that page and then plunk it out on the new page.” That’s fine, I guess, but now you’re in a client side rendered page. It’s going to be slow. It’s going to have bad SEO, because… Google will see it eventually, but it takes 10 days or something. It’s just a bad story for SEO. It’s not very resilient, because who knows what’s going to happen in the network. It’s not the greatest way to do this kind of “content elsewhere, content on site B, show page of site A”, situation.
Chris: You could also do it on the server side, though. Let’s say you had… Ruby is capable of making a network request too, but that’s even scarier because then if something fails on the network, the whole page could die or something. It’s like a nervous thing. I don’t love doing that either. But we did this just recently with a worker, in that we… because the worker’s JavaScript, it can make a fetch request. So, it fetches site A, it finds this div on the page, and then it goes and asks site B for the content. Gets the content. Plugs it into that div, and serves the page before it gets anything. So it looks like a server rendered page, but it wasn’t. It all happened at the… on the edge, at the worker level, at the serverless level.
Chris: So it’s kind of cool. I think you can imagine a fetch request on the browser probably takes, I don’t know, a second and a half or something. It probably takes a minute to do it. But because these are… site B is hosted on some nice hosting and Cloudflare has some… who knows what kind of super computers they use to do it. They do. Those are just two servers talking to each other, and that fetch request happens just so super duper, duper fast. It’s not limited to the internet connection speed of the user, so that little request takes like two milliseconds to get that data. So it’s kind of this cool way to stitch together a site from multiple sources and have it feel like, and behave like, a server rendered page. I think there’s a cool future to that.
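A condensed sketch of that edge-stitching worker, using Cloudflare’s HTMLRewriter (the “jQuery-esque thing” mentioned above); the selector and the site B URL here are invented:

```js
// Hypothetical edge-stitching worker: fetch site A's page, fetch a fragment
// from site B, and inject it before the response ever reaches the browser.
export default {
  async fetch(request) {
    const page = await fetch(request); // site A, the origin behind the worker

    // Made-up endpoint on site B that returns an HTML fragment.
    const fragment = await fetch("https://cms.example.com/fragment/about")
      .then((res) => res.text());

    return new HTMLRewriter()
      .on("#cms-content", {
        element(el) {
          // Fill the empty placeholder div with site B's HTML.
          el.setInnerContent(fragment, { html: true });
        },
      })
      .transform(page);
  },
};
```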
Drew: Are there any sort of conventions that are sort of springing up around serverless stuff? I’m sort of thinking about how to architect things. Say I’ve got something where I want to do two sort of requests to different APIs. I want to take in a postal address and geocode it against one, and then take those coordinates and send that to a florist who’s going to flower bomb my front yard or something. How would you build that? Would you do two separate things? Or would you turn that into one function and just make the request once from the browser?
Chris: Mm (affirmative). That’s a fascinating question. I’d probably have an architect function or something. One function would be the one that’s in charge of orchestrating the rest of them. It doesn’t have to be, your website is the hub and it only communicates to this array of single sources. Serverless functions can talk to other serverless functions. So I think that’s somewhat common to have kind of an orchestrator function that makes the different calls and stitches them together, and returns them as one. I think that is probably smart and faster, because you want servers talking to servers, not the client talking to a whole bunch of servers. If it can make one request and get everything that it needs, I think that’s probably generally a good idea-
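A sketch of that orchestrator for Drew’s geocode-then-florist example, with both downstream APIs entirely made up and a Node runtime new enough to have a global fetch assumed:

```js
// Hypothetical orchestrator function: one request from the browser,
// two server-to-server calls behind the scenes, one combined response.
exports.handler = async (event) => {
  const { address } = JSON.parse(event.body || "{}");

  // 1. Geocode the address (imaginary API).
  const geo = await fetch(
    `https://geocoder.example.com/lookup?q=${encodeURIComponent(address)}`
  ).then((res) => res.json());

  // 2. Send the coordinates to the (equally imaginary) florist API.
  const order = await fetch("https://florist.example.com/flower-bomb", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ lat: geo.lat, lng: geo.lng }),
  }).then((res) => res.json());

  // 3. Return a single payload to the client.
  return { statusCode: 200, body: JSON.stringify({ geo, order }) };
};
```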
Drew: Yeah, that sounds smart. Yep.
Chris: But I think that’s the ultimate thing. You get a bunch of server nerds talking, they’ll talk about the different approaches to that exact idea in 10 different ways.
Drew: Yeah. No, that sounds pretty smart. I mean, you mentioned as well that this approach is ideal if you’re using APIs where you’ve got secret information. You’ve got API keys or something that you don’t want to live in the client. Because I don’t know, maybe this florist API charges you $100 every time you flower bomb someone.
Chris: Easily.
Drew: You can basically use those functions to almost proxy the request and add in the secret information as it goes, and keep it secret. That’s a viable way to work?
Chris: Yeah, yeah. I think so. I mean, secrets are, I don’t know, they’re interesting. They’re a form of buy in I think to whatever provider you go with, because… I think largely because of source control. It’s kind of like, you could just put your API key right in the serverless function, because it’s just going to a server, right? You don’t even have to abstract it, really. The client will never see that code that executes, but in order for it to get there, there’s probably a source control along the way. It’s probably like you commit to master, and then master… then some kind of deployment happens that makes that thing go to the serverless function. Then you can’t put your API key in there, because then it’s in the repo, and you don’t put your API keys in repos. That’s good advice. Now there’s stuff. We’ve just done… at CodePen recently, we started using this git-crypt thing, which is an interesting way to put keys safely into your repos, because it’s encrypted by the time anybody’s looking at that file.
Chris: But only locally they’re decrypted, so they’re useful. So it’s just kind of an interesting idea. I don’t know if that helps in this case, but usually, cloud providers of these things have a web interface that’s, “Put your API keys here, and we’ll make them available at runtime of that function.” Then it kind of locks… it doesn’t lock you in forever but it kind of is… it’s not as easy to move, because all your keys are… you put in some input field and some admin interface somewhere.
Drew: Yeah, I think that’s the way that Netlify manage it.
Chris: They all do, you know?
Drew: Yeah. You have the secret environment variables that you can set from the web interface. That seems to work quite nicely.
Chris: Yeah, right. But then you got to leave… I don’t know, it’s not that big of a deal. I’m not saying they’re doing anything nefarious or anything. How do you deal with those secrets? Well, it’s a hard problem. So they kind of booted it to, I don’t know, “Just put them in this input field and we’ll take care of it for you, don’t worry about it.”
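Put together, the proxy pattern from a few exchanges back comes down to something like this: the key lives in an environment variable set in the provider’s dashboard and never ships to the client. The florist API and the variable name are still imaginary:

```js
// Hypothetical proxy function: the client calls this, and the secret key
// is added server-side from an environment variable.
exports.handler = async (event) => {
  // Set in the provider's web UI (Netlify-style), not committed to the repo.
  const apiKey = process.env.FLORIST_API_KEY;

  const res = await fetch("https://florist.example.com/flower-bomb", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: event.body,
  });

  return { statusCode: res.status, body: await res.text() };
};
```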
Drew: Is there anything that you’ve seen that stands out as an obvious case for things that you can do with serverless, that you just couldn’t do with a traditional kind of serverfull approach? Or is it just taking that code and sort of almost deploying it in a different way?
Chris: It’s probably mostly that. I don’t know that it unlocks any possibility that you just absolutely couldn’t run it any other way. Yeah, I think that’s a fair answer, but it does kind of commoditize it in an interesting way. Like, if somebody writes a really nice serverless function… I don’t know that this exists quite yet, but there could kind of be a marketplace, almost, for these functions. Like, I want a really good serverless function that can take a screenshot. That could be an open source project with lots of eyeballs around it, that does a tremendously good job of doing it and solves all these weird edge cases. That’s the one I want to use. I think that’s kind of cool. You know? That you can kind of benefit from other people’s experience in that way. I think that will happen more and more.
Drew: I guess it’s the benefit that we talked about, right at the top, of enabling people who write JavaScript and may have written JavaScript only for the front-end, to expand and use those skills on the back-end as well.
Chris: Yeah, yeah. I think so, I think that’s… because there’s moments like… you don’t have to be tremendously skilled to know what’s appropriate and what’s not for a website. Like, I did a little tutorial the other week, where there was this… Glitch uses these… when you save a Glitch, they give you a slug for your thing that you built, that’s, “Whiskey, tango, foxtrot. 1,000.” It’s like a clever little thing. The chances of it being unique are super high, because I think they even append a number to it or something too. But they end up being these fun little things. They open source their library that has all those words in it, but it’s like hundreds of thousands of words. The file is huge. You know? It’s megabytes large of just a dictionary of words. You probably learn in your first year of development, “Don’t ship a JavaScript file that’s megabytes of a dictionary.” That’s not a good thing to ship. You know? But Node doesn’t care. You can ship hundreds of them. It’s irrelevant to the speed on a server.
Drew: Yeah.
Chris: It doesn’t matter on a server. So, I could be like, “Hmm, well, I’ll just do it in Node then.” I’ll have a statement that says, “Words equal require words,” or whatever, in Node, at the top. “Have it randomize a number. Pull it out of the array and return it.” So that serverless function is eight lines of code with a package.json that pulls in this open source library. And then my front-end code, there’s a URL to the serverless function. It hits that URL. The URL returns one word or a group of words or whatever. You build your own little API for it. And now, I have a really kind of nice, efficient thing. What was nice about that is, it’s so simple. I’m not worried about the security of it. I don’t… you know?
Chris: It’s just… a very average or beginner JavaScript developer, I think, can pull that off, which is cool. That’s an enabling thing that they didn’t have before. Before, they were like, “Well, here’s a 2MB array of words.” “Oh, I can’t ship that to the client.” “Oh, you’ll just shut down then.” You might hit this wall that’s like, “I just can’t do that part then. I need to ask somebody else to help me with that or just not do it or pick more boring slugs or some…” It’s just, you have to go some other way that is a wall to you, because you couldn’t do it. And now, you’re, “Oh, well, I’ll just…” Instead of having that in my scripts folder, or in my source/scripts folder, I’ll put it in my functions folder instead.
Chris: You kind of like moved the script from one folder to the other. And that one happens to get deployed as a serverless function instead. How cool is that? You know? You’re using the same exact skill set, almost. There’s still some rough edges to it, but it’s pretty close.
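That eight-ish-line function might look roughly like this; the `friendly-words` package and its export names are written from memory, so treat them as assumptions rather than a checked API:

```js
// Roughly the function described above: pull random words from a big open
// source word list on the server and return them as a slug.
const { predicates, objects } = require("friendly-words"); // assumed exports

const pick = (arr) => arr[Math.floor(Math.random() * arr.length)];

exports.handler = async () => {
  const slug = `${pick(predicates)}-${pick(objects)}-${Math.floor(Math.random() * 1000)}`;
  return { statusCode: 200, body: JSON.stringify({ slug }) };
};
```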
Drew: It’s super cool. You’ve put together a sort of little micro site all about these ideas, haven’t you?
Chris: Yeah. I was a little early to the game. I was just working on it today, though, because… it gets pull requests. The idea… well, it’s at serverless.css-tricks.com and… there’s a dash in CSS-Tricks, by the way. So it’s a subdomain of CSS-Tricks, and I built it serverlessly too, so this is… CSS-Tricks is like a WordPress site, but this is a static site generator site. All the content of it is in the GitHub repo, which is open-source. So if you want to change the content of the site, you can just submit a pull request, which is nice because there’s been a hundred or so of those over time. But I built all the original content.
Drew: It’s a super useful place, because it lists… If you’re thinking, “Right, I want to get started with serverless functions,” it lists all the providers who you could try it and…
Chris: That’s all it is, pretty much, is lists of technology. Yeah.
Drew: Which is great, because otherwise, you’re just Googling for whatever and you don’t know what you’re finding. Yeah, it’s lists of API providers that help you do these sorts of things.
Chris: Forms is one example of that, because… so the minute that you choose to… let’s say, you’re going to go JAMstack, which I know that’s not necessarily the point of this, but you see how hand in hand they are. All of a sudden, you don’t have a PHP file or whatever to process that form with. How do you do forms on a JAMstack site? Well, there’s any number of ways to do it. Everybody and their sister wants to help you solve that problem, apparently. So Netlify, who I think coined the word JAMstack, they try to help you naturally, but you don’t have to use them.
Chris: In fact, I was so surprised putting this site together. Let’s see. There’s six, nine, twelve, fifteen, eighteen, twenty one, twenty two services out there, that want to help you serverlessly process your forms on this site right now. If you want to be the 23rd, you’re welcome to it, but you have some competition out there. So the idea behind this is that you write a form in HTML, like literally a form element. And then the action attribute of the form, it can’t point anywhere internally, because there’s nothing to point to. You can’t process, so it points externally. It points to whatever they want you to point it to. They’ll process the form and then they tend to do things that you’d expect them to, like send an email notification. Or send a Slack thing. Or then send it to Zapier and Zapier will send it somewhere else. They all have slightly different feature sets and pricing and things, but they’re all trying to solve that problem for you, like, “You don’t want to process your own forms? No problem. We’ll process it for you.”
Drew: Yeah, it’s a super useful resource. I’d really recommend everyone check it out. It’s serverless.css-tricks.com. So, I’ve been learning all about serverless. What have you been learning about lately, Chris?
Chris: Well, I’m still very much in this world too and learning about serverless stuff. I had an idea to… I used to play this online role playing game ages ago. I just recently discovered that it’s still alive. It’s a text based medieval fantasy kind of game. I played it when AOL was a thing, because AOL wanted to have these games that you had to be logged on to play it, because they wanted you to spend hours and hours on AOL, so they could send you these huge bills, which was, I’m sure, why they did so well at some point.
Drew: So billing by the second. Yeah.
Chris: Yeah. So games was big for them. If they could get you playing games with other people on there. So this game kind of… it didn’t debut there, but it moved to AOL, because I’m sure they got a juicy deal for it, but it was so… I mean, it’s just, couldn’t possibly be nerdier. You’re a dwarven mage and you get rune staff from your leather sheath. And you type commands into it like a terminal. Then the game responds to you. I played that game for a very long time. I was very into it. I got into the community of it and the spirit of it. It was kind of a… it was like I was just alone by myself at my computer, but yet I look back on that time in my life, and be like, “That was a wonderful time in my life.” I was really… I just liked the people and the game and all that. But then I grew up and stopped playing it, because life happens to you.
Chris: I only found out recently, because somebody started doing a podcast about it again… I don’t know how I came across it, but I just did. I was like, “This game is alive and well in today’s world, are you kidding me? This text based thing.” And I was more than happy to reactivate and get my old characters back and play it. But only to find out that the clients that they have you download for this game, haven’t evolved at all. They are awful. They almost assume that you’re using Windows. There’s just these terribly cheesy poorly rendering… and it’s text based, you think it’d at least have nice typography. No. So I’m like, “I could be involved. I could write a client for this game. Put beautiful typography in it.” Just modernize the thing, and I think the players of the game would appreciate it, but it felt overwhelming to me. “How can I do it?” But I find some open source projects. One of them is like… you can play the game through an actual terminal window, and it uses some open source libs to kind of make a GUI out of a terminal window.
Drew: Really?
Chris: I don’t know. So that was kind of cool. I was like, “If they wrote that, there must be code in there for how to connect to the game and get it all going and stuff. So at least I have some starter code.” I was trying to go along the lines of, “Maybe I’ll do it in Flutter or something,” so the final product app would work on mobile phones and, “I could really modernize this thing.” But then I got overwhelmed. I was like, “Ah, this is too big a… I can’t. I’m busy.” But I found another person who had the same idea and they were way further along with it, so I could just contribute on a design level. And it’s been really fun to work on, but I’ve been learning a lot too, because it’s rare for me to jump into a project that’s somebody else’s baby, and I’m just contributing to a little bit, and that has totally different technology choices than I would have ever picked.
Chris: It’s an Electron app. They picked that, which is also kind of a cool way to go too, because it’s my web skills… so I’m not learning anything too weird, and it’s cross-platform, which is great. So, I’ve been learning a lot about Electron. I think it’s fun.
Drew: That’s fascinating. It’s always amazing how little side projects and things that we do for fun, end up being the place where we sometimes learn the most. And learn skills that can then feed back into our sort of daily work.
Chris: That’s the only way I learn things. I’m dragged into something that… I was like, “They’re not…” It’s rendered with a JavaScript library called Mithril, which is… I don’t know if you’ve ever heard of it, but it’s weird. It’s not… it’s almost like writing React without JSX. You have to “create element” and do all these… but it’s supposed to benchmark way better than it… And it actually kind of matters because in this text based game, the text is just flying. There’s a lot of data manipulation, which is like… you’d think this text based game would be so easy for a browser window to run, but it’s actually kind of not. There’s so much data manipulation happening, that you really have to be really… we have to be conscientious about the speed of the rendering. You know?
Drew: That’s fascinating-
Chris: Pretty cool.
Drew: Yeah. If you, dear listener, would like to hear more from Chris, you can find him on Twitter, where he’s @chriscoyier. Of course, CSS-Tricks can be found at css-tricks.com and CodePen at codepen.io. But most of all, I recommend that you subscribe to the ShopTalk Show podcast if you haven’t already done so, at shoptalkshow.com. Thanks for joining us today, Chris. Do you have any parting words?
Chris: Smashingpodcast.com. I hope that’s the real URL.
Chris: It also doesn’t mean that… Databases are still there. If it turns out that architecturally having a relational database is the correct way to store that data, great. I mention that because this world of Serverless is kind of growing up at the same time that JAMstack is. And JAMstack is this architecture that’s, “You should be serving your website off of static hosts, that run nothing at all except for…” They’re like little CDNs. They’re like, “I can do nothing. I don’t run PHP. I don’t run Ruby. I run nothing. I run on a tiny little web server that’s just designed to serve static files only.”
Chris: “And then, if you need to do more than that, if you need to pull data from a relational database, then please do it at some other time, not at the server time. You can either do it in a build process ahead of time, and pull that stuff out of the database, pre-build static files and I’ll serve those, or do it at runtime.” Meaning you get this shell of a document, and then it makes a JavaScript request to get some data and prefills it then. So you do it ahead of time or after time, but it doesn’t mean, “Don’t use a relational database.” It just means, “Don’t have the server generate it at the time of the request of the document,” which is a… I don’t know, it’s a little bit of a paradigm shift.
Chris: It’s not just JAMstack either. We’re also living in the time of JavaScript frameworks. We’re living in a time where it’s starting to be a little more expected that the way that a JavaScript application boots up, is that it mounts some components, and as those components mount, it asks for the data that it needs. And so, it can be kind of a natural fit for something like a React website to be like, “Well, I’ll just hit a serverless function to cough up the data that it needs. It hits some JSON API essentially. I get the JSON data that I need and I construct myself out of that data, and then I render onto the page.” Now, whether that’s good or bad for the web, it’s like, “I don’t know. Too bad. Ship has sailed. That’s how a lot of people are building sites.” It’s just client rendered things. So, serverless and modern JavaScript kind of go hand in hand.
Drew: I suppose you don’t have to wholesale… be looking at one architecture or another. There’s an area in the middle where parts of an infrastructure might be more traditional and parts could be serverless, I’m guessing?
Chris: Yeah. Well, they’re trying to tell you that anyway. Anybody that wants to sell you any part of their architecture is like, “You don’t have to buy in all right now. Just do it a little bit.” Because of course, they want you to dip your toe into whatever they’re selling, because once you dip the toe, the chances that you splash yourself into the pool is a lot higher. So, I think that… it’s not a lie, though, necessarily, although I find a little less luck in… I don’t want my stack to be a little bit of everything. I think there’s some technical death there that I don’t always want to swallow.
Drew: Mm (affirmative).
Chris: But it’s possible to do. I think the most quoted one is… let’s say I have a site that has an eCommerce element to it, which means… and let’s say large scale eCommerce, so 10,000 products or something, that this JAMstack architecture hasn’t gotten to the point where that’s always particularly efficient to rebuild that statically. So, the thinking goes, “Then don’t.” Let that part kind of hydrate naturally with… hit serverless functions and get the data that it needs, and do all that. But the rest of the site, which isn’t… there’s not as many pages, there’s not as much data, you could kind of pre-render or whatever. So a little bit of both.
Drew: Of course, plenty of people are dealing with legacy systems that… some old database thing that was built in the 2000s that they may be able to stick a sort of JSON API layer on top of…
Chris: Yeah.
Drew: … and build something more modern, and perhaps serverless, and then still interact with those legacy systems by sort of gluing it altogether in a weird way.
Chris: Yeah. I like that though, isn’t it? Aren’t… most websites already exist. How many of us are totally green-fielding websites? Most of us work on some crap that already exists that needs to be dragged into the future for some reason, because I don’t know, developers want to work faster, or you can’t hire anybody in COBOL anymore, or whatever the story is. You know?
Drew: So terminology wise, we’re talking about JAMstack which is this methodology of running a code pretty much in the browser, serving it from a CDN. So, not having anything dynamic on the server. And then when we talk about serverless, we’re talking about those small bits of functionality that run on their server somewhere else. Is that right? That we were talking about these cloud function kind of-
Chris: Yeah, I mean, they just happen to be both kind of hot ideas right now. So it��s kind of easy to talk about one and talk about the other. But they don’t necessarily need to be together. You could run a JAMstack site that has nothing to do with serverless anything. You’re just doing it, you just pre-build the site and run it, and you can use serverless without having to care about JAMstack. In fact, CodePen does nothing JAMstack at all. Not that we want to talk about CodePen necessarily, but it’s a Ruby on Rails app. It runs on a whole bunch of AWS EC2 instances and a variety of other architecture to make it happen. But we use serverless stuff whenever we can for whatever we can, because it’s cheap and secure, and just a nice way to work. So, no JAMstack in use at all but serverless all over the place.
Drew: That’s quite interesting. What sort of tasks are you putting serverless to on CodePen?
Chris: Well, there’s a whole bunch of things. One of them is, I think, hopefully fairly obvious is, I need… the point of CodePen is that you write each HTML, CSS and JavaScript in the browser and it renders it in front of you, right? But you can pick pre-processor languages as well. Let’s say you like Sass. You turn Sass on in the CSS, and you write Sass. Well, something has to process the Sass. These days, Sass is written in Dart or something.
Chris: Theoretically, you could do that in the client. But these libraries that do pre-processing are pretty big. I don’t think I want to ship the entire Sass library to you, just to run that thing. I don’t want to… it’s just not, that’s not the right architecture for this necessarily. Maybe it is down the road, I mean, we could talk about offline crap, yada, yada, Web Workers. There’s a million architectural things we could do. But here’s how it does work now, is there’s a lambda. It processes Sass. It has one tiny, tiny, tiny, little job.
Chris: You send it this blob of Sass and it sends you stuff back, which is the processed CSS, maybe a site map, whatever. It has one tiny little job and we probably pay for that lambda, like four cents or something. Because lambdas are just incredibly cheap and you can hammer it too. You don’t have to worry about scale. You just hit that thing as much as you want and your bill will be astonishingly cheap. There is moments where serverless starts to cross that line of being too expensive. I don’t know what that is, I’m not that master of stuff like that. But generally, any serverless stuff we do, we basically… all nearly count as free, because it’s that cheap. But there’s one for Sass. There’s one for Less. There’s one for Babbel. There’s one for TypeScript. There’s one for… All those are individual lambdas that we run. Here’s some code, give it to the lambda, it comes back, and we do whatever we’re going to do with it. But we use it for a lot more than that, even recently.
Chris: Here’s an example. Every single Pen on CodePen has a screenshot. That’s kind of cool, right? So, the people make a thing and then we need a PNG or a JPEG, or something of it, so that we can… that way when you tweet it, you get the little preview of it. If you share it in Slack, you get the little preview of it. We use it on the website itself to render… instead of an iframe, if we could detect that the Pen isn’t animated, because an iframe’s image is much lighter, so why not use the image? It’s not animated anyway. Just performance gains like that. So each of those screenshots has a URL to it, obviously. And we’ve architected it so that that URL is actually a serverless function. It’s a worker. And so, if that URL gets hit, we can really quickly check if we’ve already taken that screenshot or not.
Chris: That’s actually enabled by CloudFlare Workers, because CloudFlare Workers are not just a serverless function, but they have a data store too. They have this thing called key-value store, so the ID of that, we can just check really quick and it’ll be, “True or false, do you have it or not?” If it’s got it, it serves it. And it serves it over CloudFlare, which is super fast to begin with. And then gives you all this ability too. Because it’s an image CDN, you can say, “Well, serve it in the optimal format. Serve it as these dimensions.” I don’t have to make the image in those dimensions. You just put the dimensions in the URL and it comes back as that size, magically. So that’s really nice. If it doesn’t have it, it asks another serverless function to make it really quick. So it’ll make it and then it’ll put it in a bucket somewhere… because you have to have a origin for the image, right? You have to actually host it somewhere usually. So we put it in an S3 bucket real quick and then serve it.
Chris: So there’s no queuing server, there’s no nothing. It’s like serverless functions manage the creation, storage and serving of these images. And there’s like 50 million or 80 million of them or something. It’s a lot, so it handles that at scale pretty nicely. We just don’t even touch it. It just happens. It all happens super fast. Super nice.
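A rough sketch of that screenshot flow as a Cloudflare Worker, assuming a KV namespace bound as SCREENSHOTS that maps a Pen ID to the stored image's bucket URL, and a placeholder GENERATOR_URL for the function that renders and stores new screenshots; this approximates the pattern described, it is not CodePen's actual worker.

```js
// Hypothetical Cloudflare Worker: serve an existing screenshot, or trigger creation.
addEventListener("fetch", (event) => {
  event.respondWith(handle(event.request));
});

const GENERATOR_URL = "https://example.com/generate-screenshot"; // placeholder

async function handle(request) {
  const id = new URL(request.url).pathname.split("/").pop();

  // Quick key-value check: true or false, do we already have this screenshot?
  const bucketUrl = await SCREENSHOTS.get(id);
  if (bucketUrl) {
    // Already taken: serve it straight from the bucket, through Cloudflare.
    return fetch(bucketUrl);
  }

  // Not there yet: ask another serverless function to make it, store it in a
  // bucket, and hand the image back so we can proxy it to the visitor.
  const generated = await fetch(`${GENERATOR_URL}?id=${encodeURIComponent(id)}`);
  return new Response(generated.body, {
    status: generated.status,
    headers: { "Content-Type": "image/png" },
  });
}
```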
Drew: I guess it… well, a serverless function is ideally going to suit a task that needs very little knowledge of state of things. I mean, you mentioned CloudFlare’s ability to store key-value pairs to see if you’ve got something cached already or not.
Chris: Yeah. That’s what they’re trying to solve, though, with those. Those key-value pairs, is that… I think that traditionally was true. They’re like, “Avoid state in the thing,” because you just can’t count on it. And CloudFlare Workers are being like, “Yeah, actually, you can deal with state, to some degree.” It’s not as fancy as a… I don’t know, it’s key values, so it’s a key and a value. It’s not like a nested, relational fancy thing. So there’s probably some limits to that. But this is baby days for this. I think that stuff’s going to evolve to be more powerful, so you do have some ability to do some state-like stuff.
Drew: And sometimes the limitation, that sort of limited ability to maintain state, or the fact that you want to maintain no state at all, kind of pushes you into an architecture that gives you this sort of… Well, we talk about the software philosophy of “Small Pieces Loosely Joined”, don’t we?
Chris: Mm (affirmative).
Drew: Where each little component does one thing and does it well. And doesn’t really know about the rest of the ecosystem around it. And it seems that really applies to this concept of serverless functions. Do you agree?
Chris: Yeah. I think you could have a philosophical debate whether that’s a good idea or not. You know? I think some people like the monolith, as it were. I think there’s possible… there’s ways to overdo this and to make too many small parts that are too hard to test altogether. It’s nice to have a test that’s like, “Oh, I wonder if my Sass function is working. Well, let’s just write a little test for it and make sure that it is.” But let’s say, what matters to the user is some string of seven of those. How do you test all seven of them together? I think that story gets a little more complicated. I don’t know how to speak super intelligently to all that stuff, but I know that it’s not necessarily that, if you roll with all serverless functions that’s automatically a better architecture than any other architecture. I like it. It reasons out to me nicely, but I don’t know that it’s the end-all be-all of all architectures. You know?
Drew: To me, it feels extremely web-like, in that… this is exactly how HTML works, isn’t it? You deliver some HTML and the browser will then go and fetch your images and fetch your JavaScript and fetch your CSS. It seems like it’s an expansion of that –
Chris: It’s nice.
Drew: … sort of idea. But, one thing we know about the web, is it’s designed to be resilient because network’s fragile.
Chris: Mm (affirmative).
Drew: How robust is the sort of serverless approach? What happens if something… if one of those small pieces goes away?
Chris: That would be very bad. You know? It would be a disaster. Your site would go down just like any other server, if it happens to go down, I guess.
Drew: Are there ways to mitigate that, that are particularly –
Chris: I don’t know.
Drew: … suited to this sort of approach, that you’ve come across?
Chris: Maybe. I mean, like I said, a really super fancy robust thing might be like… let’s say you visit CodePen and let’s say that there’s a JavaScript implementation of Sass and we noticed that you’re on a fairly fast network and that you’re idle right now. Maybe we’ll go grab that JavaScript and we’ll throw it in a service worker. Then, if we detect that the lambda fails, or something, or that you have this thing installed already, then we’ll hit the service worker instead of the lambda, and service workers are able to work offline. So, that’s kind of nice too. That’s interesting. I mean, they are the same language-ish. Service workers are JavaScript and a lot of Cloud functions are JavaScript, so there’s some… I think that’s a possibility, although that… it’s just, that’s some serious technical stuff that… It just scares me to have this chunk of JavaScript that you’ve delivered to who knows how many thousands of users, that you don’t necessarily know what they have, and what version of it they have. Eww, but that’s just my own scarediness. I’m sure some people have done a good job with that type of thing.
Chris: I actually don’t know. Maybe you know some strategies that I don’t, on resiliency of serverless.
Drew: I guess there’s a failure mode, a style of failure, that could happen with serverless functions, where you run a function once and it fails, and you can run it a second time immediately afterwards and it would succeed, because it might hit a completely different server. Or whatever the problem was with that run may not exist on the second request. The issue of an entire host being down is one thing, but maybe you have individual problems with one machine. You have a particular server where its memory has gone bad, and it’s throwing a load of errors, and the first time you hit it, it’s going to fail. The second time, that problem might have been routed around.
Chris: Companies that tend to offer this technology, you have to trust them, but they also happen to be the type of companies that… this is their pride. This is the reason why people use them is because they’re reliable. I’m sure people could point to some AWS outages of the past, but they tend to be a little rare, and not super common. If you were hosting your own crap, I bet they got you beat from an SLA percentage kind of level. You know? So it’s not like, “Don’t build in a resilient way,” but generally the type of companies that offer these things are pretty damn reliable. The chances of you going down because you screwed up that function are a lot higher than because their architecture is failing.
Drew: I suppose, I mean, just like anything where you’re using an API or something that can fail, is just making sure you structure your code to cope with that failure mode, and to know what happens next, rather than just throwing up an error to the user, or just dying, or what have you. It’s being aware of that and asking the user to try again. Or trying again yourself, or something.
Chris: Yeah, I like that idea of trying more than once, rather than just being, “Oh no. Fail. Abort.” “I don’t know, why don’t you try again there, buddy?”
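For what it's worth, here is a tiny sketch of that "try again" idea: a generic retry helper around fetch. The function name and retry count are arbitrary, and it assumes a runtime with a global fetch (a browser, or Node 18+).

```js
// Generic "try more than once" helper: retry a request a couple of times
// before surfacing an error to the user.
async function fetchWithRetry(url, options = {}, attempts = 3) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      const response = await fetch(url, options);
      if (response.ok) return response;
      lastError = new Error(`HTTP ${response.status}`);
    } catch (err) {
      // Network hiccup: a second attempt may land on a healthy instance.
      lastError = err;
    }
  }
  throw lastError;
}
```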
Drew: So I mean, when it comes to testing and development of serverless functions, sort of cloud functions, is that something that can be done locally? Does it have to be done in the cloud? Are there ways to manage that?
Chris: I think there are some ways. I don’t know if the story is as awesome. It’s still a relatively new concept, so I think that that gets better and better. But from what I know, for one thing, you’re writing a fairly normal Node function. Assuming you’re using JavaScript to do this, and I know that on Lambda specifically, they support all kinds of stuff. You can write a fricking PHP Cloud Function. You can write a Ruby Cloud Function. So, I know I’m specifically talking about JavaScript, because I have a feeling that most of these things are JavaScript. Even no matter what language it is, I mean, you can go to your command line locally and execute the thing. Some of that testing is… you just test it like you would any other code. You just call the function locally and see if it works.
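A sketch of what "just call the function locally" can look like, assuming the hypothetical sass-function.js handler from the earlier sketch; the file name, event shape and test values are made up.

```js
// Hypothetical local test: import the handler and invoke it the way the
// provider would, with a fake event, then assert on the response.
const { handler } = require("./sass-function");
const assert = require("assert");

(async () => {
  const event = { body: JSON.stringify({ source: "$c: red; a { color: $c; }" }) };
  const response = await handler(event);

  assert.strictEqual(response.statusCode, 200);
  assert.ok(JSON.parse(response.body).css.includes("color: red"));
  console.log("handler responded as expected");
})();
```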
Chris: It’s a little different story when you’re talking about an HTTP request to it, that’s the thing that you’re trying to test. Does it respond to the request properly? And does it return the stuff properly? I don’t know. The network might get involved there. So you might want to write tests at that level. That’s fine. I don’t know. What is the normal story there? You spin up some kind of local server or something that serves it. Use Postman, I don’t know. But there’s… Frameworks try to help too. I know there’s serverless.com, which is just terribly confusing, but there’s literally a company called Serverless and they make a framework for writing serverless functions that helps you deploy them.
Chris: So if you, like, npm install serverless, you get their framework. And it’s widely regarded as very good, because it’s just very helpful, but they don’t have their own cloud or whatever. You write these and then it helps you get them to a real lambda. Or it might work with multiple cloud providers. I don’t even know these days, but their purpose of existing is to make the deployment story easier. I don’t know what… AWS is not renowned for their simplicity. You know? There’s all this world of tooling to help you use AWS and they’re one of them.
Chris: They have some kind of paid product. I don’t even know what it is exactly. I think one of the things they do is… the purpose of using them is for testing, is to have a dev environment that’s for testing your serverless function.
Drew: Yeah, because I guess, that is quite a big part of the workflow, isn’t it? If you’ve written your JavaScript function, you’ve tested it locally, you know it’s going to do the job. How do you actually pick which provider it’s going to go into and how do you get it onto that service? Now, I mean, that’s a minefield, isn’t it?
Chris: Yeah. I mean, if you want to use no tooling at all, I think they have a really… like AWS, specifically, has a really rudimentary GUI for the thing. You can paste the code in there and hit save and be like, “Okay, I guess it’s live now.” That’s not the best dev story, but I think you could do it that way. I know CloudFlare workers have this thing called Wrangler that you install locally. You spin it up and it spins up a fake browser on the top and then dev tools below. Then you can visit the URL and it somehow intercepts that and runs your local cloud function against it. Because one of the interesting things about workers is… you know how I described how it… you don’t hit a URL and then it returns stuff. It just automatically runs when you… when it intercepts the URL, like CDN style.
Chris: So, one of the things it can do is manipulate the HTML on the way through. The worker, it has access to the complete HTML document. They have a jQuery-esque thing that’s like, “Look for this selector. Get the content from it. Replace it with this content. And then continue the request.” So you can mess with code on the way through it. To test that locally, you’re using their little Wrangler tool thing to do that. Also, I think the way we did it was… it’s also a little dangerous. The second you put it live, it’s affecting all your web traffic. It’s kind of a big deal. You don’t want to screw up a worker. You know? You can spin up a dev worker that’s at a fake subdomain, and because it’s CloudFlare, you can… CloudFlare can just make a subdomain anyway. I don’t know. It’s just kind of a nice way to do a… as you’re only affecting sub-domain traffic, not your main traffic yet. But the subdomain’s just a mirror of a production anyway, so that’s kind of a… that’s a testing story there.
Chris: It brings up an interesting thing, though, to me. It’s like… imagine you have two websites. One of them is… for us it’s like a Ruby on Rails app. Whatever. It’s a thing. But we don’t have a CMS for that. That’s just like… it’s not a CMS, really. I think there’s probably Ruby CMSs, but there’s not any renowned ones. You know? It seems like all the good CMSs are PHP, for some reason. So, you want a quality CMS. Drew, you’ve lived in the CMS market for a long time –
Drew: Absolutely.
Chris: … so you know how this goes. Let’s say you want to manage your sites in Perch or whatever, because it’s a good CMS and that’s the proper thing to use to build the kind of pages you want to build. But you don’t want to run them on the same server. You want to manage the pages on one site, but show them on another site. Well, I don’t know, there’s any number of ways to do that. But one JavaScript way could be, “Okay, load the page. There’s an empty div there. Run some JavaScript. Ask the other site for the content of that page and then plunk it out on the new page.” That’s fine, I guess, but now you’re in a client side rendered page. It’s going to be slow. It’s going to have bad SEO, because… Google will see it eventually, but it takes 10 days or something. It’s just a bad story for SEO. It’s not very resilient, because who knows what’s going to happen in the network. It’s not the greatest way to do this kind of “content elsewhere, content on site B, show page of site A”, situation.
Chris: You could also do it on the server side, though. Let’s say you had… Ruby is capable of making a network request too, but that’s even scarier because then if something fails on the network, the whole page could die or something. It’s like a nervous thing. I don’t love doing that either. But we did this just recently with a worker, in that we… because the worker’s JavaScript, it can make a fetch request. So, it fetches site A, it finds this div on the page, and then it goes and asks site B for the content. Gets the content. Plugs it into that div, and serves the page before you get anything. So it looks like a server rendered page, but it wasn’t. It all happened at the… on the edge, at the worker level, at the serverless level.
Chris: So it’s kind of cool. I think you can imagine a fetch request on the browser probably takes, I don’t know, a second and a half or something. It probably takes a minute to do it. But because these are… site B is hosted on some nice hosting and Cloudflare has some… who knows what kind of super computers they use to do it. They do. Those are just two servers talking to each other, and that fetch request happens just so super duper, duper fast. It’s not limited to the internet connection speed of the user, so that little request takes like two milliseconds to get that data. So it’s kind of this cool way to stitch together a site from multiple sources and have it feel like, and behave like, a server rendered page. I think there’s a cool future to that.
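A rough sketch of that stitching pattern using the Workers HTMLRewriter API, with a placeholder content source and selector; the real CodePen worker will differ, and the URLs here are purely illustrative.

```js
// Hypothetical edge worker: fetch page A, fill a placeholder div with content
// fetched from site B, and serve what looks like a server-rendered page.
addEventListener("fetch", (event) => {
  event.respondWith(stitch(event.request));
});

const SITE_B = "https://content.example.com"; // placeholder content source

async function stitch(request) {
  const pageA = await fetch(request); // origin response from site A

  return new HTMLRewriter()
    .on("div#remote-content", {
      async element(element) {
        // Server-to-server fetch: this happens at the edge in milliseconds,
        // not over the visitor's connection.
        const res = await fetch(`${SITE_B}/fragment`);
        const html = await res.text();
        element.setInnerContent(html, { html: true });
      },
    })
    .transform(pageA);
}
```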
Drew: Are there any sort of conventions that are springing up around serverless stuff? I’m sort of thinking about how to architect things. Say I’ve got something where I want to do two sort of requests to different APIs. I want to take in a postal address and geocode it against one, and then take those coordinates and send them to a florist who’s going to flower bomb my front yard or something. How would you build that? Would you do two separate things? Or would you turn that into one function and just make the request once from the browser?
Chris: Mm (affirmative). That’s a fascinating question. I’d probably have an architect function or something. One function would be the one that’s in charge of orchestrating the rest of them. It doesn’t have to be, your website is the hub and it only communicates to this array of single sources. Serverless functions can talk to other serverless functions. So I think that’s somewhat common to have kind of an orchestrator function that makes the different calls and stitches them together, and returns them as one. I think that is probably smart and faster, because you want servers talking to servers, not the client talking to a whole bunch of servers. If it can make one request and get everything that it needs, I think that’s probably generally a good idea-
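A sketch of that orchestrator idea with made-up geocoding and florist endpoints; it assumes a runtime with a global fetch (e.g. Node 18+), and every URL and field name here is illustrative.

```js
// Hypothetical orchestrator function: one request from the browser,
// two server-to-server calls behind the scenes.
const GEOCODE_API = "https://geocode.example.com/lookup";      // placeholder
const FLORIST_API = "https://florist.example.com/flowerbomb";  // placeholder

exports.handler = async (event) => {
  const { address } = JSON.parse(event.body || "{}");

  // Step 1: turn the postal address into coordinates.
  const geoRes = await fetch(`${GEOCODE_API}?q=${encodeURIComponent(address)}`);
  const { lat, lng } = await geoRes.json();

  // Step 2: hand those coordinates to the florist service.
  const orderRes = await fetch(FLORIST_API, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ lat, lng }),
  });

  return {
    statusCode: orderRes.status,
    body: JSON.stringify(await orderRes.json()),
  };
};
```

The browser makes one call; the slower hops happen server to server, where latency is far lower.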
Drew: Yeah, that sounds smart. Yep.
Chris: But I think that’s the ultimate thing. You get a bunch of server nerds talking, they’ll talk about the different approaches to that exact idea in 10 different ways.
Drew: Yeah. No, that sounds pretty smart. I mean, you mentioned as well that this approach is ideal if you’re using APIs where you’ve got secret information. You’ve got API keys or something that you don’t want to live in the client. Because I don’t know, maybe this florist API charges you $100 every time you flower bomb someone.
Chris: Easily.
Drew: You can basically use those functions to almost proxy the request and add in the secret information as it goes, and keep it secret. That’s a viable way to work?
Chris: Yeah, yeah. I think so. I mean, secrets are, I don’t know, they’re interesting. They’re a form of buy in I think to whatever provider you go with, because… I think largely because of source control. It’s kind of like, you could just put your API key right in the serverless function, because it’s just going to a server, right? You don’t even have to abstract it, really. The client will never see that code that executes, but in order for it to get there, there’s probably a source control along the way. It’s probably like you commit to master, and then master… then some kind of deployment happens that makes that thing go to the serverless function. Then you can’t put your API key in there, because then it’s in the repo, and you don’t put your API keys in repos. That’s good advice. Now there’s stuff. We’ve just done… at CodePen recently, we started using this git-crypt thing, which is an interesting way to put keys safely into your repos, because it’s encrypted by the time anybody’s looking at that file.
Chris: But only locally they’re decrypted, so they’re useful. So it’s just kind of an interesting idea. I don’t know if that helps in this case, but usually, cloud providers of these things have a web interface that’s, “Put your API keys here, and we’ll make them available at runtime of that function.” Then it kind of locks… it doesn’t lock you in forever but it kind of is… it’s not as easy to move, because all your keys are… you put in some input field and some admin interface somewhere.
Drew: Yeah, I think that’s the way that Netlify manage it.
Chris: They all do, you know?
Drew: Yeah. You have the secret environment variables that you can set from the web interface. That seems to work quite nicely.
Chris: Yeah, right. But then you got to leave… I don’t know, it’s not that big of a deal. I’m not saying they’re doing anything nefarious or anything. How do you deal with those secrets? Well, it’s a hard problem. So they kind of booted it to, I don’t know, “Just put them in this input field and we’ll take care of it for you, don’t worry about it.”
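A minimal sketch of that proxy pattern, assuming the provider exposes the secret as an environment variable (here a hypothetical FLORIST_API_KEY set in the provider's dashboard) and a runtime with a global fetch; the endpoint is made up.

```js
// Hypothetical proxy function: add the secret key on the server side so it
// never ships to the client and never lives in the repo.
const FLORIST_API = "https://florist.example.com/flowerbomb"; // placeholder

exports.handler = async (event) => {
  const response = await fetch(FLORIST_API, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // The secret comes from the provider's settings at runtime.
      Authorization: `Bearer ${process.env.FLORIST_API_KEY}`,
    },
    body: event.body,
  });

  return { statusCode: response.status, body: await response.text() };
};
```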
Drew: Is there anything that you’ve seen that stands out as an obvious case for things that you can do with serverless, that you just couldn’t do with a traditional kind of serverfull approach? Or is it just taking that code and sort of almost deploying it in a different way?
Chris: It’s probably mostly that. I don’t know that it unlocks any possibility that you just absolutely couldn’t run it any other way. Yeah, I think that’s a fair answer, but it does kind of commoditize it in an interesting way. Like, if somebody writes a really nice serverless function… I don’t know that this exists quite yet, but there could kind of be a marketplace, almost, for these functions. Like, I want a really good serverless function that can take a screenshot. That could be an open source project with lots of eyeballs around it, that does a tremendously good job of doing it and solves all these weird edge cases. That’s the one I want to use. I think that’s kind of cool. You know? That you can kind of benefit from other people’s experience in that way. I think that will happen more and more.
Drew: I guess it’s the benefit that we talked about, right at the top, of enabling people who write JavaScript and may have written JavaScript only for the front-end, to expand and use those skills on the back-end as well.
Chris: Yeah, yeah. I think so, I think that’s… because there’s moments like… you don’t have to be tremendously skilled to know what’s appropriate and what’s not for a website. Like, I did a little tutorial the other week, where there was this thing Glitch uses… when you save a Glitch project, they give you a slug for your thing that you built, that’s, “Whiskey, tango, foxtrot. 1,000.” It’s like a clever little thing. The chances of it being unique are super high, because I think they even append a number to it or something too. But they end up being these fun little things. They open source their library that has all those words in it, but it’s like hundreds of thousands of words. The file is huge. You know? It’s megabytes large of just a dictionary of words. You probably learn in your first year of development, “Don’t ship a JavaScript file that’s megabytes of a dictionary.” That’s not a good thing to ship. You know? But Node doesn’t care. You can ship hundreds of them. It’s irrelevant to the speed on a server.
Drew: Yeah.
Chris: It doesn’t matter on a server. So, I could be like, “Hmm, well, I’ll just do it in Node then.” I’ll have a statement that says, “Words equal require words,” or whatever, and a note at the top, “Have it randomize a number. Pull it out of the array and return it.” So that serverless function is eight lines of code with a package.json that pulls in this open source library. And then my front-end code, there’s a URL to the serverless function. It hits that URL. The URL returns one word or a group of words or whatever. You build your own little API for it. And now, I have a really kind of nice, efficient thing. What was nice about that is, it’s so simple. I’m not worried about the security of it. I don’t… you know?
Chris: It’s just… a very average or beginner JavaScript developer, I think, can pull that off, which is cool. That’s an enabling thing that they didn’t have before. Before, they were like, “Well, here’s a 2MB array of words.” “Oh, I can’t ship that to the client.” “Oh, you’ll just shut down then.” You might hit this wall that’s like, “I just can’t do that part then. I need to ask somebody else to help me with that or just not do it or pick more boring slugs or some…” It’s just, you have to go some other way that is a wall to you, because you couldn’t do it. And now, you’re, “Oh, well, I’ll just…” Instead of having that in my script slash, or in my source slash scripts folder, I’ll put it in my functions folder instead.
Chris: You kind of like moved the script from one folder to the other. And that one happens to get deployed as a serverless function instead. How cool is that? You know? You’re using the same exact skill set, almost. There’s still some rough edges to it, but it’s pretty close.
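Roughly the handful of lines Chris is describing, assuming a local words.json standing in for the open-source word list he mentions; the actual package and its API may differ.

```js
// Hypothetical serverless function: pick a random slug word on the server,
// so the multi-megabyte word list never ships to the browser.
// Assumes ./words.json is a big JSON array of words.
const words = require("./words.json");

exports.handler = async () => {
  const word = words[Math.floor(Math.random() * words.length)];

  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ word }),
  };
};
```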
Drew: It’s super cool. You’ve put together a sort of little micro site all about these ideas, haven’t you?
Chris: Yeah. I was a little early to the game. I was just working on it today, though, because… it gets pull requests. The idea… well, it’s at serverless.css-tricks.com and… there’s a dash in CSS-Tricks, by the way. So it’s a subdomain of CSS-Tricks, and I built it serverlessly too, so this is… CSS-Tricks is like a WordPress site, but this is a static site generator site. All the content of it is in the GitHub repo, which is open-source. So if you want to change the content of the site, you can just submit a pull request, which is nice because there’s been a hundred or so of those over time. But I built all the original content.
Drew: It’s a super useful place, because it lists… If you’re thinking, “Right, I want to get started with serverless functions,” it lists all the providers who you could try it and…
Chris: That’s all it is, pretty much, is lists of technology. Yeah.
Drew: Which is great, because otherwise, you’re just Googling for whatever and you don’t know what you’re finding. Yeah, it’s lists of API providers that help you do these sorts of things.
Chris: Forms is one example of that, because… so the minute that you choose to… let’s say, you’re going to go JAMstack, which I know that’s not necessarily the point of this, but you see how hand in hand they are. All of a sudden, you don’t have a PHP file or whatever to process that form with. How do you do forms on a JAMstack site? Well, there’s any number of ways to do it. Everybody and their sister wants to help you solve that problem, apparently. So Netlify, who I think invented the word JAMstack, naturally tries to help you there, but you don’t have to use them.
Chris: In fact, I was so surprised putting this site together. Let’s see. There’s six, nine, twelve, fifteen, eighteen, twenty one, twenty two services out there, that want to help you serverlessly process your forms on this site right now. If you want to be the 23rd, you’re welcome to it, but you have some competition out there. So the idea behind this is that you write a form in HTML, like literally a form element. And then the action attribute of the form, it can’t point anywhere internally, because there’s nothing to point to. You can’t process, so it points externally. It points to whatever they want you to point it to. They’ll process the form and then they tend to do things that you’d expect them to, like send an email notification. Or send a Slack thing. Or then send it to Zapier and Zapier will send it somewhere else. They all have slightly different feature sets and pricing and things, but they’re all trying to solve that problem for you, like, “You don’t want to process your own forms? No problem. We’ll process it for you.”
Drew: Yeah, it’s a super useful resource. I’d really recommend everyone check it out. It’s serverless.css-tricks.com. So, I’ve been learning all about serverless. What have you been learning about lately, Chris?
Chris: Well, I’m still very much in this world too and learning about serverless stuff. I had an idea to… I used to play this online role playing game ages ago. I just recently discovered that it’s still alive. It’s a text based medieval fantasy kind of game. I played it when AOL was a thing, because AOL wanted to have these games that you had to be logged on to play it, because they wanted you to spend hours and hours on AOL, so they could send you these huge bills, which was, I’m sure, why they did so well at some point.
Drew: So billing by the second. Yeah.
Chris: Yeah. So games was big for them. If they could get you playing games with other people on there. So this game kind of… it didn’t debut there, but it moved to AOL, because I’m sure they got a juicy deal for it, but it was so… I mean, it’s just, couldn’t possibly be nerdier. You’re a dwarven mage and you get rune staff from your leather sheath. And you type commands into it like a terminal. Then the game responds to you. I played that game for a very long time. I was very into it. I got into the community of it and the spirit of it. It was kind of a… it was like I was just alone by myself at my computer, but yet I look back on that time in my life, and be like, “That was a wonderful time in my life.” I was really… I just liked the people and the game and all that. But then I grew up and stopped playing it, because life happens to you.
Chris: I only found out recently, because somebody started doing a podcast about it again… I don’t know how I came across it, but I just did. I was like, “This game is alive and well in today’s world, are you kidding me? This text based thing.” And I was more than happy to reactivate and get my old characters back and play it. But only to find out that the clients that they have you download for this game, haven’t evolved at all. They are awful. They almost assume that you’re using Windows. There’s just these terribly cheesy poorly rendering… and it’s text based, you think it’d at least have nice typography. No. So I’m like, “I could be involved. I could write a client for this game. Put beautiful typography in it.” Just modernize the thing, and I think the players of the game would appreciate it, but it felt overwhelming to me. “How can I do it?” But I find some open source projects. One of them is like… you can play the game through an actual terminal window, and it uses some open source libs to kind of make a GUI out of a terminal window.
Drew: Really?
Chris: I don’t know. So that was kind of cool. I was like, “If they wrote that, there must be code in there to how to connect to the game and get it all going and stuff. So at least I have some starter code.” I was trying to go along the app, “Maybe I’ll do it in Flutter or something,” so the final product app would work on mobile phones and, “I could really modernize this thing.” But then I got overwhelmed. I was like, “Ah, this is too big a… I can’t. I’m busy.” But I found another person who had the same idea and they were way further along with it, so I could just contribute on a design level. And it’s been really fun to work on, but I’ve been learning a lot too, because it’s rare for me to jump into a project that’s somebody else’s baby, and I’m just contributing to a little bit, and that has totally different technology choices than I would have ever picked.
Chris: It’s an Electron app. They picked that, which is also kind of a cool way to go too, because it’s my web skills… so I’m not learning anything too weird, and it’s cross-platform, which is great. So, I’ve been learning a lot about Electron. I think it’s fun.
Drew: That’s fascinating. It’s always amazing how little side projects and things that we do for fun, end up being the place where we sometimes learn the most. And learn skills that can then feed back into our sort of daily work.
Chris: That’s the only way I learn things. I’m dragged into something that… I was like, “They’re not…” It’s rendered with a JavaScript library called Mithril, which is… I don’t know if you’ve ever heard of it, but it’s weird. It’s not… it’s almost like writing React without JSX. You have to “create element” and do all these… but it’s supposed to benchmark way better than it… And it actually kind of matters because in this text based game, the text is just flying. There’s a lot of data manipulation, which is like… you’d think this text based game would be so easy for a browser window to run, but it’s actually kind of not. There’s so much data manipulation happening, that you really have to be really… we have to be conscientious about the speed of the rendering. You know?
Drew: That’s fascinating-
Chris: Pretty cool.
Drew: Yeah. If you, dear listener, would like to hear more from Chris, you can find him on Twitter, where he’s @chriscoyier. Of course, CSS-Tricks can be found at css-tricks.com and CodePen at codepen.io. But most of all, I recommend that you subscribe to the ShopTalk Show podcast if you haven’t already done so, at shoptalkshow.com. Thanks for joining us today, Chris. Do you have any parting words?
Chris: Smashingpodcast.com. I hope that’s the real URL.
Text
Context
Pilgrim is a response to our understanding of a real-world problem, four defining pain points and a projected user base. This user base projection relies on data retrieved from a New Zealand surfing association survey held in 2016, which estimated that 1 in every 27 New Zealanders surf (“NZ Surfing Survey”, 2016). This equates to roughly 170 thousand New Zealanders who could become potential users of Pilgrim from surfing alone. Adjacent industries and activities such as snow sports, mountain biking, tramping and select music festivals provide growth channels which would see Pilgrim’s user base expand greatly, hence the emphasis on developing a versatile and transferable platform.
The concept of ridesharing has drastically changed the way we travel locally in recent years; platforms like Uber (Uber, 2018) and Zoomy (Zoomy, 2018) have taken the New Zealand and international markets by storm. Uber boasted, as of the end of 2017, a presence in 554 cities across 80 countries (Uber blog, 2017). These figures highlight an attitude shift, with the world’s urban population becoming more open to organising rides through flexible mobile platforms. However, the majority of mainstream rideshare platforms don’t cater to the diverse needs of Pilgrim’s user base. This has seen surfers and those partaking in other recreational activities turn to social media as a first port of call, in particular Facebook (Facebook, 2018). A variety of communities exist on Facebook for the purpose of sharing rides, from surfers looking for a wave to students looking to split the cost of driving to a summer festival. However, social media platforms like Facebook are not designed to act as rideshare platforms; they have been turned to because of their large user bases and popular messaging applications, which allow for ease of communication. Facebook does not provide an effective system for organising rides due to the saturation of other material and its wide range of both users and uses. The job of Pilgrim is to harness all these potential users scattered across disparate Facebook and online communities, and connect them using one centralised platform.
The rideshare market is one that caters to many different demographics and is populated by many different companies. The section that seems to be most heavily targeted is the daily urban commuter (carpooling). In New Zealand we have companies like Chariot (Chariot Ridesharing, 2016) providing such services, while globally almost every country or major city has something of a similar nature. Companies like BlaBlaCar (BlaBlaCar, 2018), a global ridesharing service, take a slightly different model to conventional carpooling apps, focusing on long-distance intercity travel; the key difference is that drivers get paid per trip rather than doing it out of pure goodwill. Locally, CoSeats (https://www.coseats.co.nz) provides a service targeted at travellers looking to move between cities; however, destinations are limited to NZ’s major towns and centres, and it lacks a standardised payment scheme, leaving passengers to make set offerings from which drivers can pick and choose. Then we have existing action-sport ridesharing platforms; most notable in New Zealand is “Snowpool” (Snowpool Limited, 2015), an online and mobile platform which, similar to Pilgrim, provides the opportunity for drivers to list available seats and for riders to find a ride to popular ski fields in New Zealand. However, the platform, founded in 2005, is an outdated service with a clumsy user interface. It fails to make use of advances in technology and additional APIs that make modern-day ridesharing so simple and accessible. Furthermore, the platform currently only services central South Island ski fields due to its small user base.
Where Pilgrim sets itself apart from existing platforms in this space is in the diversity of the needs of its users. It is often a time-consuming process finding someone heading to the same location for the same reasons and settling on a price. By targeting a tight-knit and established community we hope to alleviate the above problems: when the conditions for a particular activity are good, the majority of that community wants to be in the same place, and a standardised estimate of price will negate the need for time-consuming and potentially off-putting negotiations.
Our approach to the build was heavily influenced by the framework of human-centred design (Giacomin, 2014), the process of designing solutions to problems with the human perspective in mind. A look back at the history of user experience design and existing approaches provided us with clear direction. The author of Lean UX, Jeff Gothelf, talks about reaching a stage of consensus between designer and user: “Shared understanding is the currency of UX” (Gothelf, 2013). The creation of in-depth user personas and empathy maps, which provide a detailed breakdown of the exact pain points we had to solve, was an attempt to reach such a level of understanding, as the app itself and the accompanying user experience should be a direct response to these pain points. In terms of testing and implementing planned features, we looked again toward the lean model for guidance: “The practice of bringing the true nature of design work to light faster, in a collaborative cross functional way with less emphasis on deliverables and a greater focus on the actual experience being designed” (Gothelf, 2013). The core objective is to obtain feedback as early as possible so that it can be used to make quick decisions. The Agile software development model (Beck, Beedle, Van Bennekum, Cockburn, Cunningham, Fowler & Kern, 2001) works in tandem by outputting work in rapid cycles; these shadow the lean UX workflow to ensure the feedback and user data generated can be used in each iteration. The publication provided details the creation of Pilgrim’s user experience from the ground up, ensuring any decision in the design process is a logical step based on data from the previous iterative cycle.
Informed by innovative UX workflows, practical application began with studying existing applications and platforms in an effort to extract and implement UX features that enhance trust, community interaction and ease of communication. Platforms that facilitate connections involving logistics, such as online marketplaces and auction sites, often connect users but don’t go as far as handling the finer details. Take, for example, Airbnb (Airbnb, 2018): before a transaction is complete, users must submit a message of intent, which in turn generates an open dialogue allowing a connection to take place seamlessly. New Zealand-based charity Sustainable Coastlines rallies a community around public data that shows the impact of the collective rather than the individual (Howitt, 2018). Popular social networks such as Facebook create a sense of trust and security through layers of transparency, allowing users to see mutual connections and degrees of separation between other users (Facebook, 2018), which serve to eliminate anonymity in their user base, something the Pilgrim platform will rely upon greatly.
As a brand, we are promoting a sustainable lifestyle, so it makes sense to employ a marketing and branding strategy that reflects this through sustainable storytelling. An example can be taken from the partnership eBay and Patagonia formed, named “Common Threads”: “Patagonia encourages interest in used clothing in part through its Worn Wear blog, where readers share stories about the adventures they have had with their old Patagonia gear. One year after the initiative was launched, eBay sold 25 percent more used Patagonia products.” (Schwartz, 2013). While Pilgrim operates within a different space to Patagonia, parallels can be drawn between core values. We see our biggest marketing and distribution tool as the stories Pilgrim facilitates.
References:
Wilkinson, J. (2016, November 15). Surfing survey biggest of its kind in New Zealand. Stuff. Retrieved from: https://www.stuff.co.nz/national/86479402/surfing-survey-biggest-of-its-kind-in-new-zealand?rm=m
Uber Technologies, Inc. (2018). Uber. (Version 3.322.10004). [Mobile Application software].
Zoomy, Ltd. (2018). Zoomy. (Version 5.0.3). [Mobile application software].
Uber Blog. (2017, December 12). Kiwi trends: 2017 according to Uber and Uber Eats [Blog Post]. Retrieved from: https://www.uber.com/en-NZ/blog/yearwithubernz/
Facebook, Inc. (2018). Facebook. [Web and mobile application software].
Chariot Ridesharing. (2016). Share the ride share the rewards. Retrieved from: https://www.getchariot.com
BlaBlaCar. (2018). Click. Go. Together. Retrieved from: https://www.blablacar.com
CoSeats. (n.d.). Rideshare in New Zealand. Retrieved from: https://www.coseats.co.nz
Snowpool Limited. (2015, April 23). Snow pool. (Version 3.1). [Mobile Application Software].
Giacomin, J. (2014). What is human centred design?. The Design Journal, 17(4), 606-623.
Gothelf, J. (2013). Lean UX: Applying lean principles to improve user experience. O'Reilly Media, Inc.
Beck, K., Beedle, M., Van Bennekum, A., Cockburn, A., Cunningham, W., Fowler, M., ... & Kern, J. (2001). Manifesto for agile software development.
Airbnb, Inc. (2018). Airbnb. (Version 18.3). [Mobile Application Software].
Howitt, C. (2018, September 28). Sustainable Coastlines. Future Slam. Auckland War Memorial Museum.
Schwartz, S. (2013, November 13). Sustainability Storytelling: Creating a Narrative that Matters. BSR Conference. Retrieved from: https://www.bsr.org/our-insights/blog-view/sustainability-storytelling-creating-a-narrative-that-matters
Text
UK Uber drivers are taking its algorithm to court
A group of UK Uber drivers has launched a legal challenge against the company’s subsidiary in the Netherlands. The complaints relate to access to personal data and algorithmic accountability.
Uber drivers and Uber Eats couriers are being invited to join the challenge which targets Uber’s use of profiling and data-fuelled algorithms to manage gig workers in Europe. Platform workers involved in the case are also seeking to exercise a broader suite of data access rights baked into EU data protection law.
It looks like a fascinating test of how far existing legal protections wrap around automated decisions at a time when regional lawmakers are busy drawing up a risk-based framework for regulating applications of artificial intelligence.
Many uses of AI technology look set to remain subject only to protections baked into the existing General Data Protection Regulation (GDPR). So determining how far existing protections extend in the context of modern data-driven platforms is important.
The European Commission is also working on rebooting liability rules for platforms, with a proposal for a Digital Services Act due by the year’s end. As part of that work it’s actively consulting on related issues such as data portability and platform worker rights — so the case looks very timely.
Via the lawsuit, which has been filed in Amsterdam’s district court today, the group of Uber drivers from London, Birmingham, Nottingham and Glasgow will argue the tech giant is failing to comply with the GDPR and will ask the court to order immediate compliance — urging it be fined €10,000 for each day it fails to comply.
They will also ask the court to order Uber to comply with a request to enable them to port personal data held in the platform to a data trust they want to establish, administered by a union.
For its part Uber UK said it works hard to comply with data access requests, further claiming it provides explanations when it’s unable to provide data.
Data rights to crack open an AI blackbox?
The GDPR gives EU citizens data access rights over personal information held on them, including a right to obtain a copy of data they have provided so that it can be reused elsewhere.
The regulation also provides some additional access rights for individuals who are subject to wholly automated decision making processes where there is a substantial legal or similar impact — which looks relevant here because Uber’s algorithms essentially determine the earning potential of a driver or courier based on how the platforms assigns (or withholds) jobs from the available pool.
As we wrote two years ago, Article 22 of the GDPR offers a potential route to put a check on the power of AI blackboxes to determine the trajectory of humankind — because it requires that data controllers provide some information about the logic of the processing to affected individuals. Although it’s unclear how much detail they have to give, hence the suit looks set to test the boundaries of Article 22, as well as making reference to more general transparency and data access rights baked into the regulation.
James Farrar, an Uber driver who is supporting the action — and who was also one of the lead claimants in a landmark UK tribunal action over Uber driver employment rights (which is, in related news, due to reach the UK Supreme Court tomorrow, as Uber has continued appealing the 2016 ruling) — confirmed the latest challenge is “full spectrum” in the GDPR rights regard.
The drivers made subject access requests to Uber last year, asking the company for detailed data about how its algorithm profiles and performance manages them. “Multiple drivers have been provided access to little or no data despite making a comprehensive request and providing clear detail on the data requested,” they write in a press release today.
Farrar confirmed that Uber provided him with some data last year, after what he called “multiple and continuous requests”, but he flagged multiple gaps in the information — such as GPS data only being provided for a month out of two years’ of work; no information on the trip rating assigned to him by passengers; and no information on his profile nor the tags assigned to it.
“I know Uber maintain a profile on me but they have never revealed it,” he told TechCrunch, adding that the same is true of performance tags.
“Under GDPR Uber must explain the logic of processing, it never really has explained management algorithms and how they work to drivers. Uber has never explained to me how they process the electronic performance tags attached to my profile for instance.
“Many drivers have been deactivated with bogus claims of ‘fraudulent use’ being detected by Uber systems. This is another area of transparency required by law but which Uber does not uphold.”
The legal challenge is being supported by the App Drivers & Couriers Union (ADCU) which says it will argue Uber drivers are subject to performance monitoring at work.
It also says it will present evidence of how Uber has attached performance related electronic tags to driver profiles with categories including: Late arrival/missed ETAs; Cancelled on rider; Attitude; Inappropriate behaviour.
“This runs contrary to Uber’s insistence in many employment misclassification legal challenges across multiple jurisdictions worldwide that drivers are self-employed and not subject to management control,” the drivers further note in their press release.
Commenting in a statement, their attorney, Anton Ekker of Ekker Advocatuur, added: “With Uber BV based in the Netherlands as operator of the Uber platform, the Dutch courts now have an important role to play in ensuring Uber’s compliance with the GDPR. This is a landmark case in the gig economy with workers asserting their digital rights for the purposes of advancing their worker rights.”
The legal action is being further supported by the International Alliance of App-based Transport (IAATW) workers in what the ADCU dubs an “unprecedented international collaboration”.
Reached for comment on the challenge, Uber emailed us the following statement:
Our privacy team works hard to provide any requested personal data that individuals are entitled to. We will give explanations when we cannot provide certain data, such as when it doesn’t exist or disclosing it would infringe on the rights of another person under GDPR. Under the law, individuals have the right to escalate their concerns by contacting Uber’s Data Protection Officer or their national data protection authority for additional review.
The company also told us it responded to the drivers’ subject access requests last year, saying it had not received any further correspondence since.
It added that it’s waiting to see the substance of the claims in court.
The unions backing the case are pushing for Uber to hand over driver data to a trust they want to administer.
Farrar’s not-for-profit, Worker Info Exchange (WIE), wants to establish a data trust for drivers for the purposes of collective bargaining.
“Our union wants to establish a data trust but we are blocked in doing so long as Uber do not disclose in a consistent way and not obstruct the process. API would be best,” he said on that, adding: “But the big issue here is that 99.99% of drivers are fobbed off with little or no proper access to data or explanation of algorithm.”
In a note about WIE on the drivers’ attorney’s website the law firm says other Uber drivers can participate by providing their permission for the not-for-profit to put in a data request on their behalf, writing:
Worker Info Exchange aims to tilt the balance away from big platforms in favour of the people who make these companies so successful every day – the workers.
Uber drivers can participate by giving Worker Info Exchange their mandate to send a GDPR-request on their behalf.
The drivers have also launched a Crowdjustice campaign to help raise £30,000 to fund the case.
Discussing the legal challenge and its implications for Uber, Newcastle University law professor Lilian Edwards suggested the tech giant will have to show it has “suitable safeguards” in place around its algorithm, assuming the challenge focuses on Article 22.
Wow. This could be historic: the first art 22 case to really crack the veil of algorithmic black box secrecy and givevpowed back to dstified platform workers. Go @jamesfarrar who drove this ( sic) from the start!! #uber #a22 https://t.co/DEoX1bdCGY
— Lilian Edwards (@lilianedwards) July 20, 2020
“Article 22 normally gives you the right to demand that a decision made in a solely automated way — such as the Uber algorithm — should either not be made or made by a human. In this case Uber might claim however, with some success, that the algorithm was necessary for the Uber context with the driver,” she told us.
“However that doesn’t clear their path. They still have to provide ‘suitable safeguards’ — the biggest of which is the much-discussed right to an explanation of how the algorithm works. But noone knows how that might operate.
“Would a general statement of roughly how the algorithm operates suffice? What a worker would want instead is to know specifically how it made decisions based on his data — and maybe how it discriminated against him or disfavoured him. Uber may argue that’s simply impossible for them to do. They might also say it reveals too much about their internal trade secrets. But it’s still terrific to finally have a post GDPR case exploring these issues.”
In its guidance on Article 22 requirements on its website, the UK’s data watchdog, the ICO, specifies that data controllers “must provide meaningful information about the logic involved in the decision-making process, as well as the significance and the envisaged consequences for the individual”.
It also notes Article 22 requires that individuals who are subject to automated decisions must be able to obtain human review of the outcome if they ask. The law also allows them to challenge algorithmic decisions. While data controllers using automation in this way must take steps to prevent bias and discrimination.
Text
Dreamforce 2017 Fantasy League: How to make the most of your virtual attendance
“Anyone who stops learning is old, whether at twenty or eighty.”
— Henry Ford
Whatever levels we reach in life, learning, getting better never stops! That’s what makes events like Dreamforce stand out. Each year, this event has opened newer avenues for discovering possibilities. From an API developer of a Fortune 500 to a business development manager of a startup, Dreamforce offers a learning spectrum that’s incomparable. This year, the talk of the force is tech innovations for customer success. Simply put, it’s the Tomorrowland of lead generation and philanthropy!
For those who’ve already registered or bagged their badges, and are all set to go, wish you all a Happy Dreamforce 2017! But, hey, if you’re like me and can’t make it in person, but still want to make the most of the Dreamforce magic, then join me in drawing up our Dreamforce Sessions Fantasy league. Such is the scope of the event that it’s not just the attendees - people who want to follow Dreamforce 2017 from a distance need a plan of action too.
Dreamojis & the social vibe tribe
In today’s digital-social world, smartphones make omnipresence possible. Salesforce has introduced Dreamoji stickers and a keyboard, a rather fun way to keep track of what’s happening at Dreamforce and make connections. While it’s aimed at attendees, it keeps you close to the vibe too: it lets you add calendars to mark all upcoming Salesforce events, and if you’re around San Francisco you can locate leads and meet them. All that’s sans registration!
Whatever the size of your screen, you need to figure out which sessions you want to keep a close tab on, identify the right streams on Twitter, YouTube, or LinkedIn, and most importantly, mark their timings. A highlight of Dreamforce this year is the return of Dreamtalks – small, up-close, and personal sessions discussing disruptions in technology. You may need to wait for the first of the attendees to start tweeting on that though.
Dreamforce 2017: The fantasy league
But here’s how those of us who can’t go can still make it fun! The Dreamforce 2017 ‘fantasy league’ of speaker sessions is almost as fun as being there. Even if you’re in a tube station in Seoul, this is how San Francisco will come alive on your screen.
Ashton Kutcher on Technology Innovation as a Force for Good
When: Monday, November 6, 13:30 to 14:15 hrs
Where (if you’re there): Yerba Buena Center for the Arts, Dreamtalk Stage - Yerba Buena Theater
From cult blockbusters like Dude, Where’s My Car? and unparalleled romance in A Lot Like Love to investments in Airbnb and Uber, this tech-synced silver screen name needs no introduction. While this venture capitalist has gone places with a vision for innovative tech, his keynote session can intrigue you on how tech can be used for the greater good. That holds B2B organizations in good stead for customer success as everyone wants to do business with a tech-oriented ‘giver’. To catch this, watch out for Ashton’s personal and Dreamforce space on all social channels.
We Are All Trailblazers: Customer Success in the Fourth Industrial Revolution
This could well be termed (not certainly, if you like it that way) the cynosure of it all at Dreamforce 2017. One-time-purchases will soon be out of the window in Industry 4.0 with customer journey, experience, and lifecycle coming to the fore for all marketing and sales teams to succeed.
Stay tuned to this one as this is the speaker lineup keynote dreams are made of:
Marc Benioff, CEO & Chairman, Salesforce
Sylvia Acevedo, CEO, Girls Scouts USA
Paul Cheesbrough, CTO, 21st Century Fox
Christina M. Jones, EVP, Technology & Creative Strategy, 21st Century Fox
Kasper Rorsted, CEO, adidas
When: Monday, November 6, 15:00 to 17:00 hrs
Where: Moscone North, Main Keynote Room - North Hall D
ABM, AI, and Influencer Marketing: The New Revenue Stack
Rightly, a lot has been written and spoken about the martech stack and its ROI. However, as martech moves forward, certain elements of that stack play a bigger role in the overall ROI of your organization, whether it’s big, small, or just getting started. Three of those premier elements would surely be ABM, AI, and Influencer Marketing. While ABM has transformed marketing and sales boundaries and relationships, Influencer Marketing has been linked to brand recognition and equity. AI has been the talking point in all kinds of spheres from politics to humane sustainability.
In this pivotal theater session at Dreamforce 2017, Kirk Crenshaw, CMO, Traackr, a provider of solutions for influencer marketing and relationships, will be in the spotlight.
When: Monday, November 6, 21:00 to 21:20 hrs
Where: Palace Hotel, Innovation Theater (Sunset Court)
Salesforce Inbox: Increase CRM Engagement & Sales Productivity
The rising challenge for sales teams across the globe is to bridge the data gap that exists between email and CRM to seal the sales success deal and create enhanced, deeper customer relations. For this, empowering salespeople by pepping up their inboxes with AI and customer insights becomes essential. This keynote event on Salesforce Inbox can unravel the secrets to increasing sales productivity and engagement. In-depth insights on the best practices and tech adopted by Salesforce can enable sales reps to make next-level sales through their inboxes. It will also help to get hands-on with the use cases of Einstein, which was officially launched during Dreamforce 2016.
The speakers are:
Sriram Iyer, Senior Director, Product Management, Salesforce
Robert Lorenzen, Senior Manager, Product Marketing, Salesforce
Brian McTeague, VP, Sales Operations, Cylance
When: Tuesday, November 7, 09:30 to 10:10 hrs
Where: San Francisco Marriott Marquis Hotel, Yerba Buena Salon 8
And these are just four out of the 2,700+ sessions! Get app savvy here and grab the likes of Quip so you can follow what you want via notes and reminders. An advantage of attending an event of this magnitude digitally is that you can be in more than one place at a time, which is unimaginable when attending in person. And no surprises for guessing – I’m going to be working on a few more lists over the next week. Stay tuned or share yours with me @T_Toon84.
In San Francisco, but not at Moscone?!
A lot, and I really mean a lot, of the action takes place outside Dreamforce’s premises in San Francisco.
When scheduling networking meetings, meet outside of Moscone. Everyone needs to eat and some of the best meetings during Dreamforce can be conducted over coffee, lunch, dinner or drinks.
- Kristin Treat, Senior Director, Corporate Communications, Nintex
At times, after a long day full of gripping sessions, folks need to chill out. You’ll find many Dreamforce attendees exploring San Francisco’s popular pubs, bistros, and restaurants, where you can connect on a lighter note and catch the buzz. And with Lenny Kravitz and Alicia Keys live in concert, you can rest assured that the after-hours and after-party revelry will be in full swing too.
Finally, if you’re still around the Bay Area on the last day of Dreamforce 2017, a benefit concert to support the victims of the Northern California wildfires will have the legendary Metallica performing live alongside Dave Matthews and G-Eazy. Another opportunity beckons for you to ‘Turn the Page’, because when it comes to lead generation or giving back, ‘Nothing Else Matters’.
Now then, a quick one for those who will be at Dreamforce 2017
Here are some pro tips from industry experts on what you shouldn’t miss when you’re at Moscone:
It's key to plan ahead. With over 2,700 sessions, Dreamforce can be overwhelming for first-timers. Use the Agenda Builder to research, bookmark, and register for sessions beforehand, because many will fill up quickly. If your session fills up, Salesforce leaves 25% of the seats available for walk-ins; just make sure you arrive early to claim one.
- Ray Grady, President & CCO, CloudCraze
Whether you are attending as a representative of your company, as a free agent or as a sponsor, Dreamforce is the biggest, baddest networking event you will likely attend this year (maybe ever). You should come prepared with something worthwhile to say. You will talk to more people than you can imagine, and the connections you make at Dreamforce have the potential to move your career. What's your personal elevator pitch? Why should someone want to connect with you? How can you make a difference for them? There are so many opportunities for great conversations and connections. Don’t waste any of your time.
- Will Wiegler, CMO, SpringCM
Take advantage of all of the inclusions in your sponsorship and proactively spread the word about any sessions, your booth number, and your Dreamforce plans to your network before the event. Think beyond your booth: there are a ton of off-site events happening in the city at the same time. Register for networking parties and dinners and show up - more business can get done after hours than on the show floor.
- Michele Aymold, Director, Marketing, G2 Crowd
Win & Give: Your #DF17 Quest Awaits! http://bit.ly/2gQblkX http://pic.twitter.com/ygqlEexHpp
— Dreamforce (@Dreamforce) October 25, 2017
And don’t forget: I’d love to hear about your Dreamforce fantasy league too. Tell me your Top 5 picks for the week, and keep following my #DF17 Fantasy League series @T_Toon84.
This article first appeared on MarTech Advisor.
0 notes
Text
Decoding Uber and Eats API Data Sets: Unleashing Their Potential and Applications
In the ever-evolving landscape of the digital age, data has become the lifeblood of innovation. Two giants in the tech-driven transportation and food delivery sectors, Uber and Uber Eats, have amassed vast amounts of valuable data through their platforms. The key to unlocking the true potential of this data lies in understanding and decoding their Application Programming Interfaces (APIs).
The Power of APIs
APIs serve as the bridge that allows different software applications to communicate with each other. For Uber and Uber Eats, APIs provide developers access to a treasure trove of data that includes user behavior, location information, transaction details, and more. By tapping into these APIs, developers can gain insights, create new features, and enhance existing services.
Understanding Uber's API
Uber's API provides access to a variety of data, including ride details, driver information, and real-time ride updates. This wealth of information has given rise to innovative applications and services. For instance, developers can create apps that integrate Uber's ride-sharing capabilities into other platforms or leverage the data for urban planning and traffic management.
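To make the integration idea concrete, here is a minimal Python sketch of requesting fare estimates from the riders API. It is not a definitive implementation: the endpoint path, API version, and token environment variable are assumptions for illustration, so check Uber's current developer documentation before relying on them.

```python
import os
import requests

# A minimal sketch of pulling price estimates from Uber's riders API.
# The base URL, endpoint path, and auth scheme are assumed for illustration;
# consult Uber's developer docs for the current version and required scopes.
API_BASE = "https://api.uber.com/v1.2"          # assumed base URL
SERVER_TOKEN = os.environ["UBER_SERVER_TOKEN"]  # assumed credential name

def get_price_estimates(start_lat, start_lng, end_lat, end_lng):
    """Return fare estimates for a trip between two coordinates."""
    response = requests.get(
        f"{API_BASE}/estimates/price",
        headers={"Authorization": f"Token {SERVER_TOKEN}"},
        params={
            "start_latitude": start_lat,
            "start_longitude": start_lng,
            "end_latitude": end_lat,
            "end_longitude": end_lng,
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json().get("prices", [])

# Example: estimates for a short trip across San Francisco.
for estimate in get_price_estimates(37.7749, -122.4194, 37.8044, -122.2712):
    print(estimate.get("display_name"), estimate.get("estimate"))
```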
In urban planning, the API data can be used to analyze traffic patterns and optimize transportation infrastructure. By understanding the demand for rides in specific areas at different times, city planners can make informed decisions about road development and public transportation.
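A rough sketch of that kind of analysis, assuming you have already exported historical trip records to a CSV file (the file name and column names below are placeholders):

```python
import pandas as pd

# Load exported trip records: one row per trip, with a pickup timestamp and a
# pickup zone/neighborhood column. These column names are illustrative.
trips = pd.read_csv("uber_trips.csv", parse_dates=["pickup_time"])

trips["hour"] = trips["pickup_time"].dt.hour
trips["weekday"] = trips["pickup_time"].dt.day_name()

# Ride demand by zone and hour of day: useful for spotting commuter corridors
# and under-served areas.
demand = (
    trips.groupby(["pickup_zone", "hour"])
         .size()
         .reset_index(name="rides")
         .sort_values("rides", ascending=False)
)

print(demand.head(10))  # top ten zone/hour combinations by ride volume
```

Sorting the aggregated counts surfaces the zone-and-hour combinations where demand peaks, which is exactly the signal planners need when weighing new transit routes or added road capacity.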
Unleashing Uber Eats API Potential
Uber Eats, an extension of the Uber ecosystem, also offers a robust API that exposes a variety of data related to food delivery. This includes restaurant information, menu details, order status, and delivery tracking. Developers can use this data to create applications that enhance the user experience or streamline restaurant operations.
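For example, a developer receiving menu or order data might flatten it into rows for analysis. The payload below is a simplified, hypothetical stand-in; the real Uber Eats API response is shaped differently, so treat every field name here as an assumption.

```python
import json

# A hypothetical, simplified menu payload used purely for illustration.
sample_payload = """
{
  "restaurant": "Taqueria Uno",
  "menu": [
    {"section": "Tacos", "items": [
      {"name": "Carnitas Taco", "price": 4.50, "available": true},
      {"name": "Veggie Taco", "price": 4.00, "available": false}
    ]},
    {"section": "Drinks", "items": [
      {"name": "Horchata", "price": 3.00, "available": true}
    ]}
  ]
}
"""

def flatten_menu(payload: dict):
    """Yield one (restaurant, section, item, price, available) row per item."""
    for section in payload["menu"]:
        for item in section["items"]:
            yield (
                payload["restaurant"],
                section["section"],
                item["name"],
                item["price"],
                item["available"],
            )

for row in flatten_menu(json.loads(sample_payload)):
    print(row)
```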
One innovative application of the Uber Eats API is in the development of food recommendation systems. By analyzing user preferences, order history, and restaurant ratings, developers can create algorithms that suggest personalized food options, making the dining experience more enjoyable for users.
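Here is a deliberately tiny, content-based sketch of that idea: score each candidate restaurant by how often the user has ordered its cuisine, blended with the restaurant's average rating. All names, weights, and data are invented for illustration; a production recommender would draw on far richer signals such as collaborative filtering, delivery times, and item-level history.

```python
from collections import Counter

# Made-up order history (cuisines the user has ordered) and candidate list.
order_history = ["mexican", "mexican", "thai", "pizza", "mexican", "thai"]

restaurants = [
    {"name": "Taqueria Uno", "cuisine": "mexican", "rating": 4.6},
    {"name": "Bangkok Bowl", "cuisine": "thai", "rating": 4.2},
    {"name": "Slice House", "cuisine": "pizza", "rating": 3.9},
    {"name": "Sushi Go", "cuisine": "sushi", "rating": 4.8},
]

cuisine_counts = Counter(order_history)
total_orders = sum(cuisine_counts.values())

def score(restaurant):
    # Blend the user's revealed cuisine preference with overall quality.
    preference = cuisine_counts[restaurant["cuisine"]] / total_orders
    return 0.7 * preference + 0.3 * (restaurant["rating"] / 5.0)

for r in sorted(restaurants, key=score, reverse=True):
    print(f"{r['name']:<14} score={score(r):.2f}")
```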
Applications in Machine Learning
Decoding Uber and Uber Eats API data sets opens up exciting possibilities in the realm of machine learning. By training models on this data, developers can create predictive algorithms that anticipate user behavior, optimize delivery routes, and even predict popular food trends.
For Uber, machine learning can be applied to predict demand in specific areas, allowing the platform to allocate resources more efficiently. This not only improves the overall user experience but also enhances driver satisfaction by reducing idle time.
In the context of Uber Eats, machine learning can be employed to optimize delivery routes, reducing delivery times and increasing customer satisfaction. Predictive models can also help restaurants optimize their menus based on popular choices and customer preferences.
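As a sketch of the demand-forecasting side of this, the snippet below trains a gradient-boosted regressor on (zone, hour, weekday) aggregates. The data is synthetic and merely stands in for counts you would derive from real trip or order exports; it illustrates the modeling pattern, not Uber's actual systems.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Synthetic historical aggregates: order counts per (zone, hour, weekday).
rng = np.random.default_rng(42)
rows = 5_000
history = pd.DataFrame({
    "zone": rng.integers(0, 20, rows),
    "hour": rng.integers(0, 24, rows),
    "weekday": rng.integers(0, 7, rows),
})
# Fake ground truth: lunch and dinner peaks plus zone-specific baselines.
history["orders"] = (
    5
    + 3 * history["zone"] % 7
    + 20 * np.exp(-((history["hour"] - 12) ** 2) / 8)
    + 25 * np.exp(-((history["hour"] - 19) ** 2) / 8)
    + rng.normal(0, 2, rows)
).round()

X = history[["zone", "hour", "weekday"]]
y = history["orders"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingRegressor().fit(X_train, y_train)
print("R^2 on held-out data:", round(model.score(X_test, y_test), 3))

# Predict expected orders for zone 3 at 7 pm on the fifth weekday.
forecast = model.predict(pd.DataFrame([{"zone": 3, "hour": 19, "weekday": 4}]))
print("Forecast:", forecast[0])
```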
The Future of API-Driven Innovation
As technology continues to advance, the role of APIs in driving innovation becomes increasingly significant. Uber and Uber Eats, with their vast user bases and extensive datasets, are at the forefront of this revolution. Decoding their APIs allows developers to tap into a goldmine of data, unleashing the potential for groundbreaking applications in transportation, food delivery, and beyond.
In conclusion, the decoding of Uber and Uber Eats API data sets is a gateway to a world of possibilities. From optimizing urban transportation to revolutionizing the food delivery experience, developers have the tools they need to create innovative solutions that benefit both users and service providers. As we continue to explore the potential of these APIs, the future holds exciting developments that will shape the way we move and dine in the digital age.
0 notes
Text
Explore the world of Uber and Uber Eats API data sets, their types, and the diverse applications they offer. Discover how these data sets empower developers and businesses to deepen their understanding and leverage valuable insights.
For more information:
https://www.iwebscraping.com/ubereats-decoding-api-data-sets-unleashing-potential-and-applications.php
0 notes