# Protobuf
Text
burger.proto
syntax = "proto3";

import "google/protobuf/timestamp.proto";

enum StoreType {
  UNKNOWN = 0;
  LARGE_FRANCHISE = 1;
  SMALL_FRANCHISE = 2;
  LARGE_CHAIN = 3;
  SMALL_CHAIN = 4;
  SOLO = 5;
}

enum DeliveryType {
  FORGOT = 0;
  IN_STORE = 1;
  DRIVE_THRU = 2;
  PICK_UP = 3;
  DELIVERY = 4;
}

message BurgerMetadata {
  string store = 1;
  StoreType store_type = 2;
  string address = 3;
  string store_number = 4;
  google.protobuf.Timestamp date = 5;
  string burger_name = 6;
  uint32 patties = 7;
  uint32 cheese_slices = 8;
  repeated string cheese_varieties = 9;
  repeated string patty_materials = 10;
  DeliveryType delivery_type = 11;
}

message MeatDescription {
  string doneness = 1;
  string warmth = 2;
  string seasoning = 3;
  string texture = 4;
  string toughness = 5;
  string moisture = 6;
  string size = 7;
}

message CheeseDescription {
  string melt = 1;
  string flavor = 2;
  string quantity = 3;
}

message BunDescription {
  string moisture = 1;
  string texture = 2;
  string flavor = 3;
  string sog = 4;  // different from inherent bun moisture
}

message BurgerExecution {
  string packaging_style = 1;
  string packaging_humidity = 2;
  // is the cheese sitting atop or between the patties where it can melt down
  // onto the burger? or below the patties where it looks weird
  string cheese_structure = 3;
  string cheese_demelt = 4;  // if the cheese melted, has it cooled/congealed?
  string relative_sizing = 5;  // do the proportions of meat/cheese/bun make sense
  string correctness = 6;  // is this actually what the menu and/or order specified
}

message Burger {
  BurgerMetadata metadata = 1;
  MeatDescription meat = 2;
  CheeseDescription cheese = 3;
  BunDescription bun = 4;
  BurgerExecution execution = 5;
}
the look on the casual burgerer's face when they realize i'm serious
Text
Just wrapped up the assignments for the final chapter of the #mlzoomcamp on model deployment to Kubernetes clusters. Got foundational hands-on experience with TensorFlow Serving, gRPC, the Protobuf data format, docker compose, kubectl, kind, and actual Kubernetes clusters on EKS.
Text
least favorite part of job is how they expect me to write bland pull request descriptions for shit that fucking broke me. "ensures the frobnicator tool preserves the new frobnicator metadata fields instead of burying them under a tree by the light of the full moon," i write. "note that google.protobuf.json_format defaults to exporting fields in camelCase rather than snake_case, but bufbuild's generated load-from-json code refuses to even acknowledge the existence of camelCase fields, ignoring them as an Edwardian society lady might disdain a bootblack. this diff may be only ten lines long but the process of creating it has destroyed my will to live. let's all join hands and walk into the bay together so we never have to think about camel case or protobuf oneofs ever again."
and then i backspace all of it and write something normal
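(for the record, the camelCase half of that rant is easy to reproduce with the reference Python bindings; this sketch uses `FieldDescriptorProto`, which ships with the protobuf package and happens to have a multi-word field:)

```python
from google.protobuf import descriptor_pb2, json_format

# FieldDescriptorProto has a snake_case field called `type_name`.
msg = descriptor_pb2.FieldDescriptorProto(type_name="Foo")

# Default export renames snake_case -> camelCase ("typeName").
print(json_format.MessageToJson(msg))

# Opting back into snake_case requires an explicit flag.
print(json_format.MessageToJson(msg, preserving_proto_field_name=True))
```

the bufbuild side of the mismatch is whatever your generated loader does with those camelCase keys; that part you get to discover yourself.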
Text
I needed documentation on Kotlin's protobuf DSLs and they hit me with a jojo reference.
Text
Found out Meshtastic has a JS library that can interface over HTTP, so I can skip building the JS bridge; I just need to connect the JS lib to Pyodide, which seems relatively trivial considering I'd otherwise have to deal with protobufs in the browser manually. Just finished wrangling with pnpm for a couple hours on and off, so I have the library built.
Video
youtube
Boost Data Efficiency: Fetching from Database with gRPC Protobuf in ASP....
Quote
Overall a cool hack, but some of the protobuf commentary feels off.

> Exploits a flaw in Protobuf to easily remove all ads by changing just one byte

He is deliberately corrupting the tag of a single protobuf field. The "flaw" appears to be that Protobuf ignores tag numbers it doesn't recognize. But that isn't a flaw; it's a core design feature of Protobuf, there to enable extensibility.

> Note that the Protobuf response payload is 1.87 MiB. As mentioned earlier, decoding, modifying, and re-encoding without the C++ source proto files is computationally expensive, but a quick linear scan takes no effort at all.

1.87 MB is not that large, and these messages are presumably sent occasionally rather than continuously, so I'm a little puzzled by the performance barrier claimed here. The text seems to argue that Protobuf encoding is designed to be expensive to decode, when the opposite is true: Protobuf is meant to make decoding efficient. He says you need the source proto (schema) file to be efficient, but you don't; decoding directly into an UnknownFieldSet is easy. Alternatively, he could have written his own fake .proto schema covering only the one field he wants to remove. That would be less error-prone than the string-scanning approach, which could falsely match other data if the same byte sequence happened to occur.

> Decoding, editing, and re-encoding without the original schema is computationally expensive and will change the encoding. This is probably because you can't detect whether ZigZag encoding is used, or whether a number is an int32, int64, sint32/64, varint, etc., and also because the order of object fields is usually non-deterministic. Here is some Protobuf trivia on the subject.

I think he's mistaken here. The point is that a Protobuf encoder may emit fields in any order, so decoding a message and immediately re-encoding it can produce different bytes. Receivers, however, are required to treat those messages identically. If the field order changes, the YouTube app will almost certainly never notice. (I worked on Protobufs long ago.)
Blocking YouTube ads on AppleTV by decrypting and removing the ads from Profobuf | Hacker News
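The unknown-tag behavior at the heart of this thread is easy to see with a toy wire-format scanner. This is a sketch with invented field numbers, standing in for what any tolerant decoder does implicitly: it skips fields it doesn't recognize, so corrupting one tag byte makes a field vanish.

```python
def _varint(buf, i):
    """Decode one base-128 varint starting at buf[i]; return (value, next_index)."""
    value = shift = 0
    while True:
        b = buf[i]
        i += 1
        value |= (b & 0x7F) << shift
        if not b & 0x80:
            return value, i
        shift += 7

def keep_known_fields(buf, known):
    """Re-emit only the top-level fields whose number is in `known`.
    Real decoders perform the same skip for unknown tags."""
    out = bytearray()
    i = 0
    while i < len(buf):
        start = i
        key, i = _varint(buf, i)
        num, wire_type = key >> 3, key & 7
        if wire_type == 0:        # varint payload
            _, i = _varint(buf, i)
        elif wire_type == 2:      # length-delimited payload (bytes/string/message)
            length, i = _varint(buf, i)
            i += length
        else:                     # 32/64-bit wire types: not handled in this sketch
            raise ValueError(f"wire type {wire_type} not handled")
        if num in known:
            out += buf[start:i]
    return bytes(out)

# Field 1 = "ad", field 2 = "video" (hypothetical field numbers).
msg = b"\x0a\x02ad" + b"\x12\x05video"
# The one-byte patch: rewrite field 1's key so the field number becomes 15.
patched = bytes([(15 << 3) | 2]) + msg[1:]
```

A decoder that only knows fields 1 and 2 reproduces `msg` unchanged, but drops the "ad" field from `patched`, since field 15 is unknown to it.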
Text
I can’t remember when I wrote these down, but I think it was last year sometime:
Interview Prep Goals To Accomplish
Complete React tutorial
Get to a place where the AlgoExpert Hard questions are easy for me
Notice & understand common algo solving patterns
Clearly describe how the internet works
Complete Advent of Code 2022
Complete 100 Days of Code
Complete AlgoExpert from AlgoExpert
Complete FrontendExpert from AlgoExpert
Complete MLExpert from AlgoExpert
Complete SystemsExpert from AlgoExpert
Build a plan before solving problems and speak through it as I work
Understand latency, availability, load balancer, long polling, web socket
Understand sync/async flow
Understand pytest better
Understand protobufs better
Pass practice interviews
Pass real interviews
Get multiple offers
Here’s what I’ve actually been able to accomplish:
Got pretty far in the React tutorial, learned a good amount, interviewed with it, and dropped it after realizing there's so much I'd need to do to get hired as a full-stack dev; I've solidified my place as a Backend SWE :) for now at least. I know enough React to do projects as I need to, but not enough to pass an interview.
SOME AlgoExpert Hard questions are feasible for me, nowhere near EASY yet, and I don’t HAVE to get there…for any reason
Notice & understand common algo solving patterns
Somewhat understand and can articulate how the internet works
Completed some questions on AlgoExpert from AlgoExpert
Did some FrontendExpert from AlgoExpert & took some of their quizzes
Started SystemsExpert from AlgoExpert & took some of their quizzes
Build a plan before solving problems and speak through it as I work
Understand latency, availability, load balancers
Understand sync/async flow somewhat
Understand pytest better
Passing practice interviews
Passing real interviews, no offers yet though
Completed 5-week interview prep course
Learned more about APIs
Understand how to implement pagination & searching
Understand Postman, SQLAlchemy, & FastAPI
Can call APIs in a coding interview environment like Coderpad
Here are some things in my life I have accomplished also:
Improved my relationship with my family.
I’m strong as heck physically and have been losing fat and gaining muscle.
I can sit and work 12 hour days. You couldn't get me to sit for more than 3 previously.
I can get through the day without a nap.
I’m more disciplined in every area of my life.
I release people who don’t want to be in my life anymore.
Got admitted to an improv theater after passing their multi-day auditions.
Made a rude guy who disrespected me apologize to my face.
All glory to God.
Text
I would like to inform OOP that juh has implemented a protobuf protocol for burger reviewing
if there's one thing i appreciate about americans, it's that they are DELIGHTED by "american love burger" jokes. they can't get enough of them! much like how they can't get enough of burger……
Text
thinking of rewriting my burger schema in protobuf. so you can send burgers over the internet
Text
A Practical Guide to Implementing gRPC in Go Applications | Example
1. Introduction

1.1 Brief Explanation and Importance

gRPC is a modern, high-performance remote procedure call (RPC) framework that enables efficient communication between services. It is built on HTTP/2 and uses protocol buffers (protobuf) for serialization. In Go applications, gRPC has become a popular choice for building scalable and…
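A minimal service definition of the kind such a guide would begin with might look like this (the package, message names, and go_package path here are invented for illustration):

```proto
syntax = "proto3";

package greeter.v1;

// Hypothetical module path; adjust to your repo.
option go_package = "example.com/greeter/gen/greeterpb";

// A unary RPC: one request in, one response out.
service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply);
}

message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}
```

Running protoc with the Go and gRPC plugins over a file like this generates the client and server stubs the rest of such a guide builds on.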
Text
Modern API Design Patterns for Custom Applications

In today’s fast-paced digital ecosystem, building scalable, maintainable, and efficient applications is more crucial than ever. Custom software developers play a vital role in this transformation by leveraging robust API design patterns to ensure seamless communication between services, apps, and systems. APIs are no longer just bridges between applications—they are core components of modern software architectures, especially for businesses looking to offer highly tailored solutions.
Let’s explore some of the most impactful API design patterns that are shaping how custom applications are being developed in 2025.
1. RESTful API: Still the Foundation
Although not exactly new, RESTful APIs continue to be a cornerstone in modern API architecture. They offer a straightforward approach to resource management using HTTP methods. The stateless nature of REST makes it ideal for scalable, modular application development.
Custom applications—whether web platforms or mobile solutions—rely heavily on REST for simplicity and broad compatibility. By adhering to REST principles, developers ensure that applications remain loosely coupled and easy to maintain or scale.
2. GraphQL: The Flexible Alternative
For applications requiring complex data fetching with minimal overhead, GraphQL has emerged as a powerful alternative to REST. Unlike REST, which requires multiple endpoints for different resources, GraphQL uses a single endpoint and allows clients to request exactly the data they need.
This precision is extremely valuable for frontend-heavy applications, such as SPAs (Single Page Applications) or mobile apps with bandwidth constraints. Custom software developers often use GraphQL to improve performance, reduce latency, and enhance the client experience.
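In practice that means a single query shaped exactly like the response the client wants (the schema and field names below are invented for illustration):

```graphql
query ProductCard($id: ID!) {
  product(id: $id) {
    name
    price
    reviews(first: 3) {
      rating
      summary
    }
  }
}
```

One round trip, no over-fetching: the server returns precisely these fields and nothing else.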
3. gRPC: High-Performance Communication
Google’s gRPC is ideal for internal service-to-service communication. It’s especially useful in microservices architectures where speed and efficiency are critical. gRPC uses Protocol Buffers (Protobuf) for serialization, which is much faster than JSON.
gRPC supports bi-directional streaming and strong typing, which is great for building real-time applications like messaging platforms, live dashboards, or financial trading systems. For developers building custom enterprise systems, gRPC offers both speed and type safety.
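A bi-directional streaming RPC of the kind described above can be sketched like this (all names hypothetical):

```proto
syntax = "proto3";

package quotes.v1;

// Client streams subscription changes; server streams price updates.
service QuoteFeed {
  rpc Stream (stream Subscription) returns (stream Quote);
}

message Subscription {
  string ticker = 1;
}

message Quote {
  string ticker = 1;
  double price = 2;
  int64 ts_ms = 3;  // server timestamp, milliseconds since epoch
}
```

Both sides can send messages independently over one HTTP/2 connection, which is what makes this pattern a fit for live dashboards and tickers.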
4. API Gateway Pattern
As applications become more complex, managing multiple APIs efficiently becomes essential. That’s where the API Gateway pattern shines. Acting as a single entry point for all client requests, an API gateway handles routing, load balancing, authentication, and even caching.
This pattern simplifies client-side logic and secures backend services. For businesses offering multiple services or products through a single app, an API Gateway ensures smooth and secure communication without exposing internal architecture.
5. Backend for Frontend (BFF)
In multi-platform environments—where different interfaces like web, mobile, and IoT devices consume the same data—a Backend for Frontend (BFF) pattern is especially useful. Instead of one backend serving all clients, each client type has a specialized backend that caters to its unique needs.
This design reduces the complexity of frontend logic and improves performance. For example, a mobile app might need a trimmed-down response compared to the desktop web version. BFF allows for tailored responses without burdening a single backend with all variations.
Mobile App Considerations: Performance Meets Budget
Modern APIs also influence project costs, especially in mobile development. Choosing the right pattern can save time, money, and resources. This is where a mobile app cost calculator comes in handy. It helps estimate the cost implications of different architectural choices and API strategies before writing a single line of code.
If you're planning a mobile app and wondering how your API choices will affect the budget, try out a mobile app cost calculator to make data-driven decisions from day one.
Want to tailor your app’s API strategy for better performance and scalability? Book an Appointment with our experts and get personalized guidance for your project.
6. Event-Driven API Pattern
Real-time and asynchronous systems benefit greatly from event-driven APIs. Instead of waiting for client requests, these APIs push updates when events occur. This is useful for applications like notifications systems, stock tickers, or order tracking platforms.
Event-driven architecture typically uses tools like Kafka or WebSockets and is a powerful choice for building responsive, low-latency systems. It also enhances decoupling between services, improving maintainability and fault tolerance.
7. OpenAPI (Swagger) for Standardization
Standardization through tools like OpenAPI (formerly Swagger) helps ensure consistent API design across large teams. It allows developers to document APIs, generate client SDKs, and even auto-generate test cases.
For businesses relying on teams of custom software developers, OpenAPI fosters better collaboration and faster onboarding by providing a single source of truth for how APIs are structured and behave.
Wrapping Up: Building Future-Proof APIs
As application ecosystems continue to evolve, so too must the APIs that support them. Whether you're building mobile apps, enterprise dashboards, or IoT integrations, the right API design pattern can significantly impact your product’s performance, scalability, and maintainability.
From REST to GraphQL, and from API gateways to event-driven architectures, modern patterns provide flexibility to match a wide range of use cases. Investing in good API design not only enhances the developer experience but also delivers better end-user satisfaction.
If you're considering building or upgrading your digital product, now is the time to embrace these modern API design patterns. They’re not just trends—they're foundational to how applications will be built in the future.
Need help navigating these choices? We offer custom software development services tailored to your business needs. Reach out today and let's bring your vision to life.
Text
Docs That Write Themselves: Scaling With gRPC and Protobuf
http://securitytc.com/TKF52t
Text
Subverting Google Authenticator
Moral status: ecru hat.
Goal: Remove a critical dependency on the founder's phone
Say you have a lovely startup -- @startup.com, seed round, such unicorn -- where you've acquired a bunch of customers and more every day. They want you(r business) to act as a specific entity within their business, so they each give you access to a semishared google account: [email protected], [email protected], [email protected] . Maybe it's just you, maybe a bunch of their employees use it too as a shared entity.
Mandatory Multifactor Auth is here.
Your business now has an existential dependency on your CTO's cellphone & installed authenticator app. Yes, you can (, should, did) extract rescue OTPs. Yes, you always can go shamefacedly to your customers and beg them to beg IT to restore service.
But the phone is going to fall in a lake. It has so much criticality to your business, the dark side is gathering. You can hear it calling now: "bus number of 1", it croaks. Wouldn't it be nice if you could manage the account the customer gave you in a way consonant with its actual delegation of privileges?

You can.
The hack: Extract the secrets from authenticator, put them in a safe location, and access them (more) programmatically
Environment: M1/Sonoma.
0. Ensure you are not currently under cyberattack
Also, burn some sage, scatter salt, etc. This is about to earn you some bad security karma.
1. Setup:
$ brew install oath-toolkit
$ brew install protobuf
$ brew install trurl
2. Exfil a secret from authenticator onto your laptop in encoded form
Go into your Android "Authenticator" app and select one of your secrets to exfil.
Screenshot the export screen.
Paste the image into a new (, private) Keep note.
Load that note on a second screen.
Use your phone's camera app to scan the QR code. Don't click it! Long press and copy it. Go back to the same Keep note. Paste the text there, it will be a URL like otpauth-migration://offline?data=C...%3D . This is the secret, but it's wrapped up in layers of encoding.
3. Dump the pb to extract the actual innermost secret
$ echo "otpauth-migration://offline?data=C...%3D" | trurl -f - --get '{query:data}' > /tmp/otpauthmigration.txt
This is the protobuf encoded in base64 (un-url decoded tho'). It turns out they've changed formats at some point, because there exists code & documentation on the format, but it's wrong. We want path 1.1 (a repeated secret descriptor's field 1, its actual raw secret).
$ printf $(cat /tmp/otpauthmigration.txt | base64 -d | protoc --decode_raw | sed -rn 's/ 1: "(.*)"/\1/p') | base32
This is the actual payload of the single secret you exported in step 2 encoded in base32 right there in your terminal history. (funny edit: tumblr elided the backticks from that command... Fixed now!)
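If protoc isn't at hand, the same path-1.1 walk can be done in a few lines of stdlib Python. This is a sketch that assumes the layout described above: repeated field-1 submessages, each carrying the raw secret in its own field 1.

```python
import base64

def _varint(buf, i):
    """Decode one base-128 varint starting at buf[i]; return (value, next_index)."""
    value = shift = 0
    while True:
        b = buf[i]
        i += 1
        value |= (b & 0x7F) << shift
        if not b & 0x80:
            return value, i
        shift += 7

def fields(buf):
    """Yield (field_number, value) for varint and length-delimited fields."""
    i = 0
    while i < len(buf):
        key, i = _varint(buf, i)
        num, wire_type = key >> 3, key & 7
        if wire_type == 0:        # varint payload
            value, i = _varint(buf, i)
        elif wire_type == 2:      # length-delimited payload (bytes/message)
            length, i = _varint(buf, i)
            value, i = buf[i:i + length], i + length
        else:                     # 32/64-bit wire types: not handled in this sketch
            raise ValueError(f"wire type {wire_type} not handled")
        yield num, value

def secrets_base32(payload):
    """Path 1.1: each repeated field-1 submessage holds the raw secret in field 1."""
    for num, value in fields(payload):
        if num == 1:
            for inner_num, inner_value in fields(value):
                if inner_num == 1:
                    yield base64.b32encode(inner_value).decode()
```

Feed it the base64-decoded migration payload and it yields the same base32 strings as the protoc pipeline.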
Stick this in 1password, it's the thing you need going forward. Call it $SECRET.
From now on, you can
$ oathtool -v --totp -b $SECRET
and get the same result as the app, from any device (check!)
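For the curious, what oathtool computes here is small enough to reimplement yourself: a stdlib-only RFC 6238 sketch (the secret below is a made-up example, not one of mine):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """RFC 6238 TOTP with SHA-1, i.e. the same thing `oathtool --totp -b` computes."""
    pad = "=" * (-len(secret_b32) % 8)              # re-add any stripped base32 padding
    key = base64.b32decode(secret_b32.upper() + pad)
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("GEZDGNBVGY3TQOJQ"))  # six digits, same as the app, no phone required
```

It matches the RFC 6238 test vectors, so if it disagrees with the app, your $SECRET is wrong, not the math.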
Closing remarks
Don't do this.
If you were to do this, you could no longer repudiate actions the user's account took. Is your security good enough that you can afford to smudge the actor model around like this? Isn't it easier to just pin your hopes and dreams to the survival of a single hardware device, or reengage with customers whenever anything goes wrong?