#Graph API
Explore tagged Tumblr posts
infydeva · 1 year ago
Text
[General Availability] Microsoft Graph Rich Notifications
Microsoft Graph allows apps to subscribe to and receive change notifications for resources through different delivery channels. You can set up subscriptions to include the changed resource data (such as the content of a Microsoft Teams chat message or Microsoft Teams presence information) in change notifications. Change notifications that include the resource change data are called Rich…
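For a rough sense of what that setup involves (a sketch, not copied from the announcement — field names and limits should be checked against Microsoft's change-notifications reference): a subscription opts into rich notifications by setting `includeResourceData` and supplying a public certificate, which Graph uses to encrypt the resource payload included in each notification.

```http
POST https://graph.microsoft.com/v1.0/subscriptions
Content-Type: application/json

{
  "changeType": "created",
  "notificationUrl": "https://contoso.example/api/notifications",
  "resource": "/teams/{team-id}/channels/{channel-id}/messages",
  "includeResourceData": true,
  "encryptionCertificate": "<base64-encoded public certificate>",
  "encryptionCertificateId": "my-cert-id",
  "expirationDateTime": "2024-01-01T00:00:00Z",
  "clientState": "secret-client-state"
}
```

The URLs, IDs, and dates above are placeholders; the decryption side requires the matching private key.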
0 notes
techbloggingfool · 2 months ago
Text
Optimizing Email Branding with BIMI
Have you ever noticed that some of the people and companies that email you have different icons? Somehow, rather than initials, they are represented by a custom avatar in your message list. Usually, it is their corporate logo. BIMI, which is an acronym for Brand Indicators for Message Identification, is the protocol being used to achieve the effect. In a nutshell, the protocol requires that you…
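At its core, BIMI is a DNS TXT record published at a well-known selector under your domain, pointing at an SVG logo (and, for some mailbox providers, a Verified Mark Certificate). A hypothetical record for example.com might look like this — the paths are placeholders, and providers generally also require DMARC at an enforcement policy before they will display the logo:

```
default._bimi.example.com. IN TXT "v=BIMI1; l=https://example.com/brand/logo.svg; a=https://example.com/brand/vmc.pem"
```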
0 notes
jcmarchi · 2 months ago
Text
10 Best Free Social Media Plugins for WordPress – Speckyboy
New Post has been published on https://thedigitalinsider.com/10-best-free-social-media-plugins-for-wordpress-speckyboy/
WordPress and social media are a powerful combination. And there are plenty of ways to integrate platforms like Facebook, Instagram, X, or TikTok into your website. You can display social content on your site, allow users to share your content, or add functionality like social logins.
The social platform landscape is also changing. The Fediverse makes it easier to integrate with services like Mastodon or Bluesky. It allows you to publish in multiple places via a unified protocol. The WordPress ecosystem has embraced this model.
Social media’s importance is evident in the number of available plugins. You’ll find options for top platforms that offer a variety of functions. There’s something available for every niche.
With that in mind, let’s look at some top social media plugins for WordPress!
ActivityPub turns your WordPress website into a federated profile. It publishes your content to multiple platforms that support the Fediverse (Mastodon, among others). Followers will then see your blog posts in their feeds. The neat aspect is that your profile name is consistent across federated platforms.
Users follow you on the service of their choice and enjoy your content. It’s a great way to nurture an online community.
This plugin solves a longstanding problem for those posting to multiple social networks. It combines your Facebook, Instagram, Pinterest, and Twitter/X profiles into an attractive grid layout embedded on your site. It creates better awareness of your profiles and ensures users won’t miss anything.
AddToAny is a longtime favorite that makes it easy for visitors to share your content. It works with every social platform you can imagine – there’s also an option to share via email. In addition, you can customize the design and location of the plugin’s sharing UI. This plugin is perfect for taking full control of social sharing.
Open Graph is a protocol for sharing rich content on social platforms like Facebook and Twitter/X. This plugin allows you to customize the Open Graph tags generated by your content.
Thus, content looks how you want when embedded into social media. The plugin works with all WordPress content and integrates with WooCommerce, Yoast SEO, and others.
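For context, Open Graph tags are just `<meta>` elements in a page's `<head>`; a plugin like this controls what gets emitted. A typical hand-written set (titles and URLs here are placeholders) looks like:

```html
<meta property="og:type" content="article" />
<meta property="og:title" content="My Post Title" />
<meta property="og:description" content="A one-sentence summary for link previews." />
<meta property="og:image" content="https://example.com/preview.jpg" />
<meta property="og:url" content="https://example.com/my-post" />
```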
Posting links to your content across multiple platforms is a pain. Blog2Social makes it easier by automating the process.
The free version works with Bluesky, Facebook, LinkedIn, Threads, and more. There’s also an add-on that works with Twitter/X and a premium version with advanced features. It turns a tedious task into a piece of cake.
WP Social is a suite of tools for adding social media to your WordPress website. First, you’ll find sharing buttons that work with a selection of social platforms. You can customize the layout and services displayed.
A social counter shows how many times your content was shared on a particular service. Lastly, a social login/registration function lets users create an account using one of nine compatible services (Facebook, GitHub, Google, etc.). It may be the only social plugin you need.
You can use Jetpack Social to automate content sharing and increase engagement. Share posts and WooCommerce products automatically and use the included scheduling tools to choose the perfect timing.
The plugin integrates with multiple services and adds helpful tools to the Block Editor. If you have a Jetpack Social plan, you’ll have access to the social image generator feature.
Here’s a plugin that helps spread the word about your WooCommerce products. It adds a custom set of social sharing icons to individual product pages.
There are options for icon placement, shape, and order. There’s also a handy “copy to clipboard” option so users can grab your product’s URL. The pro version includes a “floating” icon bar, tooltips, and more.
Video creators will love Feeds for YouTube. It’s a tool for easily embedding a YouTube channel into your WordPress website. You can customize the look of your feed and even display multiple feeds.
The plugin will check for new videos hourly (or choose a custom interval), so your site will always show the latest content. The Pro version adds compatibility with live broadcasts via the YouTube Live API.
Use AdTribes to expand the reach of your WooCommerce store. The plugin creates product feeds that integrate with services like Facebook and Google. Filtering options let you choose which products and what product data is included in your feed. Now, customers can view your products right from their social feeds.
Seamlessly Add Social Media to WordPress
Integrating social media into your website has real benefits. For example, embedding a feed adds dynamic content and builds awareness. Even better is that feeds are relatively simple to set up. From there, it’s smooth sailing.
The other big benefit is the opportunity to add functionality. Social logins make it easier for users to register for your site. And product feeds entice users to visit your online shop.
The idea is to reach people where they are. The options above (and the untold number of other social plugins) help you do so. The possibilities are almost limitless.
Social Media WordPress Plugin FAQs
What Are Social Media Plugins for WordPress?
They’re plugins that allow you to connect your WordPress site with various social media platforms. They offer sharing buttons, auto-posting to social media, and the ability to display your social feeds on your site.
Who Should Use Social Media Plugins?
They’re perfect for website owners who want to improve their social media presence, like bloggers, businesses, and online retailers, or anyone looking to increase their site’s engagement through social media.
Can Social Media Plugins Help in Growing My Site’s Audience?
Yes, by making sharing effortless and displaying your social activity, these plugins can attract more visitors and encourage them to interact with your content.
More Essential Free WordPress Plugins
Related Topics
0 notes
geekabhi · 1 year ago
Text
Facebook Graph API
Dive into the transformative realm of Instagram management with the power of the Facebook Graph API. Explore how GeekAbhi.com simplifies the landscape, offering insights and strategies to elevate your social media game. Discover the art of connection and streamline your Instagram presence.
0 notes
cyber-sec · 2 months ago
Text
Hackers Use ClickFix Trick to Deploy PowerShell-Based Havoc C2 via SharePoint Sites
Source: https://thehackernews.com/2025/03/hackers-use-clickfix-trick-to-deploy.html
More info: https://www.fortinet.com/blog/threat-research/havoc-sharepoint-with-microsoft-graph-api-turns-into-fud-c2
6 notes · View notes
eccentric-nucleus · 5 months ago
Text
i've always been kinda interested in making my own music for w/e game thing i make, & between listening to some king gizzard & the lizard wizard music b/c of that come see him video that everybody reblogs all the time and rewatching an old youtube video about the tuning of gamelan music AND finding an old talk about functional music construction AND discovering that there's now a builtin javascript synthesizer library (the 'web audio api') i was like, hey, i could actually try out synthing up some music.
like, i'm not gonna try putting anything real together yet but i could at least read up on the api since it seems incredibly complicated. it really is a conceptual modular synthesizer where components are linked together with signals. wires everywhere
anyway i was like, let's look at how to make a sine wave. "sine" is a preset but beyond that it gets into arbitrary waveforms
Here, we create a PeriodicWave with two values. The first value is the DC offset, which is the value at which the oscillator starts. A value of 0 is good here because it starts the curve at the middle of the [-1.0; 1.0] range. The second and subsequent values are sine and cosine components, similar to the result of a Fourier transform, which converts time domain values to frequency domain values. Here, with createPeriodicWave(), you specify the frequencies, and the browser performs an inverse Fourier transform to get a time domain buffer for the frequency of the oscillator. In this example, we set only one component at full volume (1.0) on the fundamental tone, so we get a sine wave. Bear in mind that the fundamental tone corresponds to the oscillator's frequency (which, by default, is 440 Hz). Therefore, altering the oscillator's frequency effectively shifts the frequency of this periodic wave along with it.
The coefficients of the Fourier transform should be given in ascending order (i.e. (a+bi)e^i, (c+di)e^(2i), (f+gi)e^(3i), etc.) and can be positive or negative. A simple way of manually obtaining such coefficients (though not the best) is to use a graphing calculator.
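to make the coefficient layout concrete, here's a small sketch (in Python rather than JavaScript, purely to show the math) of the synthesis createPeriodicWave() implies: index 0 is the DC offset, and index n drives the partial at n times the oscillator frequency. this ignores the normalization the browser applies by default.

```python
import math

def synthesize(real, imag, freq=440.0, sample_rate=44100, n_samples=4):
    """Evaluate the periodic wave described by Fourier coefficient arrays.

    real[0] is the DC offset; real[n]/imag[n] are the cosine/sine
    components of the partial at n times the fundamental frequency.
    (Assumes real and imag have the same length, as the API requires.)
    """
    out = []
    for s in range(n_samples):
        t = s / sample_rate
        v = real[0]  # start from the DC offset
        for n in range(1, len(real)):
            v += real[n] * math.cos(2 * math.pi * n * freq * t)
            v += imag[n] * math.sin(2 * math.pi * n * freq * t)
        out.append(v)
    return out

# real=[0, 0], imag=[0, 1]: no DC offset, one sine partial at full
# volume on the fundamental -- i.e. a plain 440 Hz sine wave.
wave = synthesize([0, 0], [0, 1])
```

wires everywhere indeed, but the math underneath is just a sum of sines and cosines.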
music from first principles, everybody!!
8 notes · View notes
uinferno · 5 months ago
Text
There's a lot of talk about Spotify "using AI in their latest wrapped," and jokes about how there's no way in hell anyone hand crafted previous years' wrapped, and while not wrong, I do think it is an oversimplification.
There's a difference between fine tuning an algorithm to aggregate a Wrapped vs using a GenAI out of the box. Spotify and YT already had automation running the show, but the difference is that genre categorization was probably originally done using graph theory and nodes—comparing which songs are often listened together, which artists often collaborate, and how people themselves classify the songs vs GenAI which opens ChatGPT or one of its competitors and types "Name a genre/vibe from these list of songs," and that's it.
It's not even an issue of neural networks. I'm certain you can use machine learning that analyzes the mp3/.wav files to classify songs for you, using songs everyone already knows the genres of as training data to calculate the unknowns. Van Halen is Metal, TSwift is Pop, Zedd is EDM, Kendrick Lamar is Hip-Hop and Rap (I listen to none of these people). Even lesser known but still sizeable indie artists with passionate fans can do some of the categorizing for your data.
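As a toy illustration of that graph-flavored approach (entirely invented data, nothing like Spotify's actual pipeline): count how often an unlabeled song is co-listened with songs whose genre is already known, and take a majority vote among its neighbors.

```python
from collections import Counter

# Invented listening sessions: each set is songs one listener played together.
sessions = [
    {"song_a", "song_b", "mystery"},
    {"song_a", "mystery"},
    {"song_c", "mystery"},
    {"song_b", "song_a"},
]

# Known labels for some songs -- the "training data" role described above.
genres = {"song_a": "metal", "song_b": "metal", "song_c": "edm"}

def guess_genre(song):
    """Label a song by majority vote over its co-listened, labeled neighbors."""
    votes = Counter()
    for session in sessions:
        if song in session:
            for other in session:
                if other != song and other in genres:
                    votes[genres[other]] += 1  # edge weight = co-listen count
    return votes.most_common(1)[0][0] if votes else None

# "mystery" co-occurs with metal songs three times and an EDM song once,
# so the vote comes out "metal".
```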
To me, it's the difference between using a calculator vs. asking chatgpt to help you with math. They're both computers doing all the work, but one is actually tailor made for this purpose and the other is touted as a cyber cure all.
I'm a programmer myself. I've made and run neural networks before, including LLMs, but I think there is a massive failure of communication regarding this shit. There's already a meme among programmers about how people boast about "their new AI service" and it's just an API to ChatGPT rather than actually taking what technologies run under the hood and deciding how and if it should be applied.
10 notes · View notes
digitaldetoxworld · 17 days ago
Text
Data Analysis: Turning Information into Insight
In today’s digital age, data has become a vital asset for businesses, researchers, governments, and individuals alike. However, raw data on its own holds little value until it is interpreted and understood. This is where data analysis comes into play. Data analysis is the systematic process of inspecting, cleaning, transforming, and modeling data with the objective of discovering useful information, drawing conclusions, and supporting decision-making.
What Is Data Analysis In Research 
What is Data Analysis?
At its core, data analysis involves extracting meaningful insights from datasets. These datasets can range from small, structured spreadsheets to large, unstructured data lakes. The primary aim is to make sense of data to answer questions, solve problems, or identify trends and patterns that are not immediately apparent.
Data analysis is used in virtually every industry—from healthcare and finance to marketing and education. It enables organizations to make evidence-based decisions, improve operational efficiency, and gain competitive advantages.
Types of Data Analysis
There are several kinds of data analysis, each serving a unique purpose:
1. Descriptive Analysis
Descriptive analysis answers the question: “What happened?” It summarizes raw data into digestible formats like averages, percentages, or counts. For instance, a retailer might analyze last month’s sales to determine which products performed best.
2. Diagnostic Analysis
This form of analysis explores the reasons behind past outcomes. It answers: “Why did it happen?” For example, if a company sees a sudden drop in website traffic, diagnostic analysis can help pinpoint whether it was due to a technical problem, changes in SEO ranking, or competitor activity.
3. Predictive Analysis
Predictive analysis uses historical data to forecast future outcomes. It answers: “What is likely to happen?” It employs statistical models and machine learning algorithms to identify patterns and predict future trends, such as customer churn or product demand.
4. Prescriptive Analysis
Prescriptive analysis provides recommendations based on data. It answers: “What should we do?” This is the most advanced type of analysis and often combines insights from predictive analysis with optimization and simulation techniques to guide decision-making.
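To make the first and third types concrete, here is a minimal sketch with made-up monthly sales figures: a descriptive summary (what happened) next to a naive moving-average forecast (what is likely to happen). Real predictive work would use proper statistical models, not a three-month average.

```python
from statistics import mean

monthly_sales = [120, 135, 150, 160, 170, 180]  # invented figures

# Descriptive: summarize what happened.
total = sum(monthly_sales)
average = mean(monthly_sales)

# Predictive (naive): forecast next month as the mean of the last three.
forecast = mean(monthly_sales[-3:])
```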
The Data Analysis Process
The process of data analysis commonly follows these steps:
1. Define the Objective
Before diving into data, it’s essential to clearly understand the question or problem at hand. A well-defined objective guides the entire analysis and ensures that efforts are aligned with the desired outcome.
2. Collect Data
Data can come from numerous sources, including databases, surveys, sensors, APIs, or social media. It’s important to ensure that the data is relevant, timely, and of sufficient quality.
3. Clean and Prepare Data
Raw data is often messy—it may contain missing values, duplicates, inconsistencies, or errors. Data cleaning involves addressing these problems. Preparation may include formatting, normalization, or creating new variables.
4. Analyze the Data
This is where statistical or computational techniques are applied to the prepared data. Tools like Excel, SQL, Python, R, or specialized software such as Tableau, Power BI, and SAS are typically used.
5. Interpret Results
Analysis isn’t just about numbers; it’s about meaning. Interpreting results involves drawing conclusions, explaining findings, and linking insights back to the original objective.
6. Communicate Findings
Insights must be communicated effectively to stakeholders. Visualization tools such as charts, graphs, dashboards, and reports play a vital role in telling the story behind the data.
7. Make Decisions and Take Action
The ultimate aim of data analysis is to inform decisions. Whether it’s optimizing a marketing campaign, improving customer service, or refining a product, actionable insights turn data into real-world results.
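Steps 3 to 5 are where most of the effort usually goes. Here is a tiny sketch of the cleaning-and-analyzing part, using invented survey responses:

```python
from statistics import mean

# Step 2's output: raw responses, with the usual mess (step 3's input).
raw_ages = ["34", "29", None, "29", "forty", "41", ""]

# Step 3: clean -- drop blanks and non-numeric entries, convert types.
clean_ages = []
for value in raw_ages:
    if value and str(value).isdigit():
        clean_ages.append(int(value))

# Step 4: analyze -- a simple descriptive statistic.
average_age = mean(clean_ages)

# Step 5: interpret -- e.g. "respondents average about 33 years old".
```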
Tools and Technologies for Data Analysis
A wide selection of tools is available for data analysis, each suited to different tasks and skill levels:
Excel: Great for small datasets and quick analysis. Offers formulas, pivot tables, and charts.
Python: Powerful for complex data manipulation and modeling. Popular libraries include Pandas, NumPy, Matplotlib, and Scikit-learn.
R: A statistical programming language widely used for statistical analysis and data visualization.
SQL: Essential for querying and managing data stored in relational databases.
Tableau & Power BI: User-friendly business intelligence tools that turn data into interactive visualizations and dashboards.
Applications of Data Analysis
Healthcare: Analyzing patient data to improve treatment plans, predict outbreaks, and manage resources.
Finance: Detecting fraud, managing risk, and guiding investment strategies.
Retail: Personalizing marketing campaigns, managing inventory, and optimizing pricing.
Sports: Enhancing performance through player data and game analysis.
Public Policy: Informing decisions on education, transportation, and economic development.
Challenges in Data Analysis
Data Quality: Incomplete, outdated, or incorrect data can lead to misleading conclusions.
Data Privacy: Handling sensitive data requires strict adherence to privacy regulations like GDPR.
Skill Gaps: There’s a growing demand for skilled data analysts who can interpret complex datasets.
Integration: Combining data from disparate sources can be technically challenging.
Bias and Misinterpretation: Poorly designed analysis can introduce bias or lead to incorrect assumptions.
The Future of Data Analysis
As data continues to grow exponentially, the field of data analysis is evolving rapidly. Emerging trends include:
Artificial Intelligence (AI) & Machine Learning: Automating analysis and producing predictive models at scale.
Real-Time Analytics: Enabling decisions based on live data streams for faster response.
Data Democratization: Making data accessible and understandable to everyone in an organization.
2 notes · View notes
uegub · 3 months ago
Text
Why Tableau is Essential in Data Science: Transforming Raw Data into Insights
Data science is all about turning raw data into valuable insights. But numbers and statistics alone don’t tell the full story—they need to be visualized to make sense. That’s where Tableau comes in.
Tableau is a powerful tool that helps data scientists, analysts, and businesses see and understand data better. It simplifies complex datasets, making them interactive and easy to interpret. But with so many tools available, why is Tableau a must-have for data science? Let’s explore.
1. The Importance of Data Visualization in Data Science
Imagine you’re working with millions of data points from customer purchases, social media interactions, or financial transactions. Analyzing raw numbers manually would be overwhelming.
That’s why visualization is crucial in data science:
Identifies trends and patterns – Instead of sifting through spreadsheets, you can quickly spot trends in a visual format.
Makes complex data understandable – Graphs, heatmaps, and dashboards simplify the interpretation of large datasets.
Enhances decision-making – Stakeholders can easily grasp insights and make data-driven decisions faster.
Saves time and effort – Instead of writing lengthy reports, an interactive dashboard tells the story in seconds.
Without tools like Tableau, data science would be limited to experts who can code and run statistical models. With Tableau, insights become accessible to everyone—from data scientists to business executives.
2. Why Tableau Stands Out in Data Science
A. User-Friendly and Requires No Coding
One of the biggest advantages of Tableau is its drag-and-drop interface. Unlike Python or R, which require programming skills, Tableau allows users to create visualizations without writing a single line of code.
Even if you’re a beginner, you can:
✅ Upload data from multiple sources
✅ Create interactive dashboards in minutes
✅ Share insights with teams easily
This no-code approach makes Tableau ideal for both technical and non-technical professionals in data science.
B. Handles Large Datasets Efficiently
Data scientists often work with massive datasets—whether it’s financial transactions, customer behavior, or healthcare records. Traditional tools like Excel struggle with large volumes of data.
Tableau, on the other hand:
Can process millions of rows without slowing down
Optimizes performance using advanced data engine technology
Supports real-time data streaming for up-to-date analysis
This makes it a go-to tool for businesses that need fast, data-driven insights.
C. Connects with Multiple Data Sources
A major challenge in data science is bringing together data from different platforms. Tableau seamlessly integrates with a variety of sources, including:
Databases: MySQL, PostgreSQL, Microsoft SQL Server
Cloud platforms: AWS, Google BigQuery, Snowflake
Spreadsheets and APIs: Excel, Google Sheets, web-based data sources
This flexibility allows data scientists to combine datasets from multiple sources without needing complex SQL queries or scripts.
D. Real-Time Data Analysis
Industries like finance, healthcare, and e-commerce rely on real-time data to make quick decisions. Tableau’s live data connection allows users to:
Track stock market trends as they happen
Monitor website traffic and customer interactions in real time
Detect fraudulent transactions instantly
Instead of waiting for reports to be generated manually, Tableau delivers insights as events unfold.
E. Advanced Analytics Without Complexity
While Tableau is known for its visualizations, it also supports advanced analytics. You can:
Forecast trends based on historical data
Perform clustering and segmentation to identify patterns
Integrate with Python and R for machine learning and predictive modeling
This means data scientists can combine deep analytics with intuitive visualization, making Tableau a versatile tool.
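The forecasting half of that list boils down to fitting a trend. As an illustration of the kind of calculation involved (not how Tableau implements it internally), here is a least-squares line over invented monthly values, extrapolated one step ahead:

```python
from statistics import mean

months = [1, 2, 3, 4, 5]
revenue = [10.0, 12.0, 14.0, 16.0, 18.0]  # invented, perfectly linear data

# Ordinary least squares: slope and intercept of the best-fit line.
mx, my = mean(months), mean(revenue)
slope = sum((x - mx) * (y - my) for x, y in zip(months, revenue)) / \
        sum((x - mx) ** 2 for x in months)
intercept = my - slope * mx

# Extrapolate the trend to month 6.
forecast = slope * 6 + intercept
```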
3. How Tableau Helps Data Scientists in Real Life
Tableau has been adopted across many industries to make data science more impactful and accessible. Here are some real-life scenarios:
A. Healthcare Analytics
Hospitals and research institutions use Tableau to:
✅ Monitor patient recovery rates and predict disease outbreaks
✅ Analyze hospital occupancy and resource allocation
✅ Identify trends in patient demographics and treatment results
B. Finance and Banking
Banks and investment firms rely on Tableau to:
✅ Detect fraud by analyzing transaction patterns
✅ Track stock market fluctuations and make informed investment decisions
✅ Assess credit risk and loan performance
C. Marketing and Customer Insights
Companies use Tableau to:
✅ Track customer buying behavior and personalize recommendations
✅ Analyze social media engagement and campaign effectiveness
✅ Optimize ad spend by identifying high-performing channels
D. Retail and Supply Chain Management
Retailers leverage Tableau to:
✅ Forecast product demand and adjust inventory levels
✅ Identify regional sales trends and adjust marketing strategies
✅ Optimize supply chain logistics and reduce delivery delays
These applications show why Tableau is a must-have for data-driven decision-making.
4. Tableau vs. Other Data Visualization Tools
There are many visualization tools available, but Tableau consistently ranks as one of the best. Here’s why:
Tableau vs. Excel – Excel struggles with big data and lacks interactivity; Tableau handles large datasets effortlessly.
Tableau vs. Power BI – Power BI is great for Microsoft users, but Tableau offers more flexibility across different data sources.
Tableau vs. Python (Matplotlib, Seaborn) – Python libraries require coding skills, while Tableau simplifies visualization for all users.
This makes Tableau the go-to tool for both beginners and experienced professionals in data science.
5. Conclusion
Tableau has become an essential tool in data science because it simplifies data visualization, handles large datasets, and integrates seamlessly with various data sources. It enables professionals to analyze, interpret, and present data interactively, making insights accessible to everyone—from data scientists to business leaders.
If you’re looking to build a strong foundation in data science, learning Tableau is a smart career move. Many data science courses now include Tableau as a key skill, as companies increasingly demand professionals who can transform raw data into meaningful insights.
In a world where data is the driving force behind decision-making, Tableau ensures that the insights you uncover are not just accurate—but also clear, impactful, and easy to act upon.
3 notes · View notes
chaosciara · 1 year ago
Text
> be me
> is bored and just finished a probably terrible natural language processing assignment for stats
> realises that even if it’s bad i had fun doing wordclouds and sentiment analyses
> decides to keep the fun going and wonders what else i can rope into nlp hellscape
> gets dangerous glint in my eye as my tumblr blog shines at me
> decides to make a word cloud out of my tumblr tags
> starts the coding process but realises i need an API to access my blog through code
> tries to get API
> tumblr wants me to register an app that states why i want to access tumblr interface
> don’t know how to explain i just want to make a dumb colourful word graph of stupid shit i said on my tumblr in ‘23
> HELP ME
12 notes · View notes
mostlysignssomeportents · 1 year ago
Text
This day in history
Tomorrow (November 29), I'm at NYC's Strand Books with my novel The Lost Cause, a solarpunk tale of hope and danger that Rebecca Solnit called "completely delightful."
#15yrsago Peak Population: when will population growth stop, why, and how? https://www.alexsteffen.com/peak_population_and_sustainability
#15yrsago James Boyle’s “The Public Domain” — a brilliant copyfighter’s latest book, from a law prof who writes like a comedian https://memex.craphound.com/2008/11/29/james-boyles-the-public-domain-a-brilliant-copyfighters-latest-book-from-a-law-prof-who-writes-like-a-comedian/
#10yrsago NSA and Canadian spooks illegally spied on diplomats at Toronto G20 summit https://www.cbc.ca/news/politics/new-snowden-docs-show-u-s-spied-during-g20-in-toronto-1.2442448
#10yrsago New CC licenses: tighter, shorter, more readable, more global https://creativecommons.org/Version4/
#10yrsago Berlusconi kicked out of Italian senate https://www.theguardian.com/world/2013/nov/27/silvio-berlusconi-ousted-italian-parliament-tax-fraud-conviction
#5yrsago Sennheiser’s headphone drivers covertly changed your computer’s root of trust, leaving you vulnerable to undetectable attacks https://www.bleepingcomputer.com/news/security/sennheiser-headset-software-could-allow-man-in-the-middle-ssl-attacks/
#5yrsago New York City’s municipal debt collectors have forged an unholy alliance with sleazy subprime lenders https://www.bloomberg.com/confessions-of-judgment
#5yrsago Here’s how the Pentagon swindled Congress with $21 trillion worth of undocumented, untraceable, unaccounted for expenditures https://www.thenation.com/article/archive/pentagon-audit-budget-fraud/
#5yrsago The prosecutor who helped Jeffrey Epstein escape justice is now a Trump Cabinet member https://www.miamiherald.com/news/local/article220097825.html
#5yrsago Reddit takes a stand against the EU’s plan to break the internet https://www.redditinc.com/blog/the-eu-copyright-directive-what-redditors-in-europe-need-to-know/
#5yrsago The secret history of science fiction’s women writers: The Future is Female! https://memex.craphound.com/2018/11/29/the-secret-history-of-science-fictions-women-writers-the-future-is-female/
#5yrsago Redaction ineptitude reveals names of Proud Boys’ self-styled new leaders https://splinternews.com/proud-boys-failed-to-redact-their-new-dumb-bylaws-and-a-1830700905
#5yrsago Redaction ineptitude reveals Facebook’s 2012 plan to sell Graph API access to user data for $250,000 https://arstechnica.com/tech-policy/2018/11/facebook-pondered-for-a-time-selling-access-to-user-data/
#5yrsago Google engineer calls for a walkout over China censorship and raises $200K strike fund in hours https://twitter.com/lizthegrey/status/1068208484053856256
#5yrsago Correlates of Trump voting: searches for erectile dysfunction, hair loss, how to get girls, penis enlargement, penis size, steroids, testosterone and Viagra https://www.washingtonpost.com/news/monkey-cage/wp/2018/11/29/how-donald-trump-appeals-to-men-secretly-insecure-about-their-manhood/
#5yrsago Google’s secret project to build a censored Chinese search engine bypassed the company’s own security and privacy teams https://theintercept.com/2018/11/29/google-china-censored-search/
#5yrsago Mozilla pulls a popular paywall circumvention tool from Firefox add-ons store https://web.archive.org/web/20181130141509/https://github.com/iamadamdev/bypass-paywalls-firefox/issues/82
#1yrago The Big Four accounting firms are one (more) scandal away from collapse https://pluralistic.net/2022/11/29/great-andersens-ghost/#mene-mene-bezzle
14 notes · View notes
canmom · 1 year ago
Text
VR observations, 10 months in
I've been a game dev for 10 months now. It's pretty great, I'm enjoying it a lot, I get to spend my days doing crazy shader shit and animations and voxels and visual effects. Hopefully the game that will come out of all this will be one people enjoy, and in any case I'm learning so much that will eventually come back to the personal ~artistic~ side of things. I can't talk about that game just yet though (but soon it will be announced, I'm pretty sure). So this is a post about other games.
Mind you, I don't actually play very many VR games, or games in general these days, because I'm too busy developing the dang things. but sometimes I do! And I think it's interesting to talk about them.
These aren't really reviews as such. You could project all sorts of ulterior motives if it was. Like my livelihood does sorta depend on people buying VR headsets and then games on them. This is more just like things I observe.
Headsets
The biggest problem with VR at the moment is wearing a headset for too long kinda sucks. The weight of the headset is all effectively held on a lever arm and it presses on your face. However, this is heavily dependent on the strap you use to hold it to your head. A better balanced and cushioned strap can hold the headset still with less pressure and better balance the forces.
The strap that comes with the Quest 3 is absolute dogshit. So a big part of the reason I wouldn't play VR games for fun is because after wearing the headset for 30-60 minutes in the daily meeting, the absolute last thing I'd want to do is wear it any longer. Recently I got a new strap (a ~£25 Devaso one, the low end of straps), and it's markedly improved. It would probably be even better if I got one of the high end Bobo straps. So please take it from me: if you wanna get into VR, get a decent strap.
I hear the Apple Vision Pro is a lot more comfortable to wear for long periods, though I won't have a chance to try it until later this month.
During the time I've been working at Holonautic, Meta released their Quest 3, and more recently Apple released their hyper expensive Vision Pro for much fanfare.
The Quest 3 is a decent headset and probably the one I'd recommend if you're getting into VR and can afford a new console. It's not a massive improvement over the Quest 2 - the main thing that's better is the 'passthrough' (aka 'augmented reality', the mode where the 3D objects are composited into video of what's in front of you), which is now in full colour, and feels a lot less intrusive than the blown out greyscale that the Quest 2 did. But it still has some trouble with properly taking into account depth when combining the feeds from multiple cameras, so you get weird space warping effects when something in the foreground moves over something in the background.
The Vision Pro is by all accounts the bees knees, though it costs $3500 and already sold out, so good luck getting one. It brings a new interaction mode based on eye tracking, where you look at a thing with your eyes to select it like with a mouse pointer, and hold your hands in your lap and pinch to interact. Its passthrough is apparently miles ahead, it's got a laptop tier chip, etc etc. I'm not gonna talk about that though, if you want to read product reviews there are a million places you can do it.
Instead I wanna talk about rendering, since I think this is something that only gets discussed among devs, and maybe people outside might be interested.
Right now there is only one game engine that builds to the Vision Pro, which is Unity. However, Apple have their own graphics API, and the PolySpatial API used for the mixed reality mode is pretty heavily locked down in terms of what you can do.
So what Unity does is essentially run a transpilation step to map its own constructs into PolySpatial ones. For example, say you make a shader in Shader Graph (you have to use shader graph, it won't take HLSL shaders in general) - Unity will generate a vision pro compatible shader (in MaterialX format) from that. Vertex and fragment shaders mostly work, particle systems mostly don't, you don't get any postprocessing shaders, anything that involves a compute shader is right out (which means no VFX graph), Entities Graphics doesn't work. I don't think you get much control over stuff like batching. It's pretty limited compared to what we're used to on other platforms.
I said fragment shaders mostly work. It's true that most Shader Graph nodes work the same. However, if you're doing custom lighting calculations in a Unity shader, a standard way to do things is to use the 'main light' property provided by Unity. On the Vision Pro, you don't get a main light.
The Vision Pro actually uses an image-based lighting model, which uses the actual room around you to provide lighting information. This is great because objects in VR look like they actually belong in the space you're in, but it would of course be a huge security issue if all programs could get realtime video of your room, and I imagine the maths involved is pretty complex. So the only light information you get is a shader graph node which does a PBR lighting calculation based on provided parameters (albedo, normal, roughness, metallicity etc.). You can then instruct it to do whatever you want with the output of that inside the shader.
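For anyone curious what "a PBR lighting calculation based on provided parameters" boils down to, here's a deliberately crude single-light sketch in plain Python. Real shader code evaluates a proper BRDF per pixel, and the visionOS node is a black box, so treat this as illustration of the inputs only:

```python
# Crude single-light "PBR-ish" shade in plain Python, purely to
# illustrate the kind of inputs (albedo, normal, roughness, metallic)
# such a lighting node consumes. All vectors are 3-tuples; normal and
# light_dir must be unit length.

def simple_pbr(albedo, normal, light_dir, light_color, roughness, metallic):
    ndotl = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    # Metals get no diffuse term; dielectrics keep their albedo.
    diffuse = [a * (1.0 - metallic) * ndotl for a in albedo]
    # Rougher surface -> broader, weaker highlight (very rough approximation).
    shininess = 2.0 / max(roughness * roughness, 1e-4)
    specular = ndotl ** shininess
    return [min(1.0, (d + specular) * c) for d, c in zip(diffuse, light_color)]

# Red-ish dielectric lit from above and slightly in front:
color = simple_pbr(
    albedo=(0.8, 0.2, 0.2), normal=(0.0, 1.0, 0.0),
    light_dir=(0.0, 0.8, 0.6), light_color=(1.0, 1.0, 1.0),
    roughness=0.5, metallic=0.0,
)
print(color)
```

On the Vision Pro the "light" parameters you never see; you just hand over the surface parameters and get shaded output back, which is exactly why the room video never has to reach your code.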
The upshot of this is that we have to make different versions of all our shaders for the Vision Pro version of the game.
Once the game is announced we'll probably have a lot to write about developing interactions for the vision pro vs the quest, so I'll save that for now. It's pretty fascinating though.
Anyway, right now I've still yet to wear a Vision Pro. Apple straight up aren't handing out devkits, we only have two in the company still, so mostly I'm hearing about things second hand.
Shores of Loci
A few genres of VR game have emerged by now. Shooting and climbing are two pretty well-solved problems, so a lot of games involve that. But another one is 3D puzzles. This is something that would be incredibly difficult on a flat screen, where manipulating 3D objects is quite difficult, but becomes quite natural and straightforward in VR.
I've heard about one such game that uses 3D scans of real locations, but Shores of Loci is all about very environment artist authored levels, lots of grand sweeping vistas and planets hanging in the sky and so on. Basically you go through a series of locations and assemble teetering ramshackle buildings and chunks of landscape, which then grow really big and settle into the water. You can pull the pieces towards you with your hand, and then when you rotate them into roughly the right position and orientation relative to another piece, they snap together.
It's diverting, if kinda annoying when you just can't find the place the piece should go - especially if the answer turns out to be that there's an intermediate piece that floated off somewhere. The environments are well-designed and appealing, it's cool to see the little guys appearing to inhabit them. That said it does kinda just... repeat that concept a bunch. The narrative is... there's a big stone giant who appears and gives you pieces sometimes. That's it basically.
Still, it's interesting to see the different environment concepts. Transitions have this very cool distorted sky/black hole effect.
However, the real thing that got me with this game, the thing that I'm writing about now, was the water. They got planar reflections working. On the Quest! This is something of a white whale for me. Doing anything that involves reading from a render texture is so expensive that it's usually a no-go, and yet here it's working great - planar reflections complete with natural looking distortion from ripples. There's enough meshes that I assume there must be a reasonably high number of draw calls, and yet... it's definitely realtime planar reflections, reflections move with objects, it all seems to work.
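For reference, the core of a planar reflection is just mirroring the camera (or equivalently, the scene's points) across the water plane before rendering the second pass. A sketch of that transform:

```python
def reflect_point(p, n, d):
    """Reflect point p across the plane n.x + d = 0 (n must be unit length).
    This is the transform a planar-reflection camera applies before
    re-rendering the scene into the mirror texture."""
    dist = sum(pi * ni for pi, ni in zip(p, n)) + d
    return tuple(pi - 2.0 * dist * ni for pi, ni in zip(p, n))

# Water plane y = 0 (normal pointing up): a point 1.5 units above the
# surface reflects to 1.5 units below it.
mirrored = reflect_point((2.0, 1.5, -3.0), (0.0, 1.0, 0.0), 0.0)
print(mirrored)  # (2.0, -1.5, -3.0)
```

The maths is the cheap part; the expensive part is that second full render of the scene into a texture every frame, which is exactly what's so surprising to see running at framerate on a Quest.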
There's a plugin called Mirrors and Reflections for VR that provides an implementation, but so far my experience has been that the effect is too expensive (in terms of rendertime) to keep 72fps in a more complex scene. I kind of suspect the devs are using this plugin, but I'm really curious how they optimised the draw calls down hard enough to work with it, since there tends to be quite a bit going on...
Moss
This game's just straight up incredibly cute.
Third person VR games, where you interact with a character moving across a diorama-like level, are a tiny minority of VR games at the moment. I think it's a shame because the concept is fantastic.
Moss is a puzzle-platformer with light combat in a Redwall/Mouse Guard-like setting. The best part of Moss is 1000% interacting with your tiny little mousegirl, who is really gorgeously animated - her ears twitch, her tail swings back and forth, she tumbles, clambers, and generally moves in a very convincing and lifelike way.
Arguably this is the kind of game that doesn't need to be made in VR - we already have strong implementations of 'platformer' for flatscreen. What I think the VR brings in this case is this wonderful sense of interacting with a tiny 3D world like a diorama. In some ways it's sorta purposefully awkward - if Quill walks behind something, you get a glowing outline, but you might need to crane your neck to see her - but having the level laid out in this way as a 3D structure you can play with is really endearing.
Mechanically, you move Quill around with the analogue stick, and make her jump with the buttons, standard stuff. Various level elements can be pushed or pulled by grabbing them with the controllers, and you can also drag enemies around to make them stand on buttons, so solving a level is a combination of moving pieces of the level and then making Quill jump as appropriate.
The fact that you're instantiated in the level, separate from Quill, also adds an interesting wrinkle in terms of 'identification with player character'. In most third person games, you tend to feel that the player character is you to some degree. In Moss, it feels much more like Quill is someone I've been made responsible for, and I feel guilty whenever I accidentally make her fall off a cliff or something.
A lot is clearly designed around fostering that protective vibe - to heal Quill, you have to reach out and hold her with your hand, causing her to glow briefly. When you complete some levels, she will stop to give you a high five or celebrate with you. Even though the player is really just here as 'puzzle solver' and 'powerful macguffin', it puts some work in to make you feel personally connected to Quill.
Since the camera is not locked to the character, the controls are instead relative to the stage, i.e. you point the stick in the direction on the 2D plane you want Quill to move. This can make certain bits of platforming, like moving along a narrow ledge or tightrope, kinda fiddly. In general it's pretty manageable though.
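A little sketch of what stage-relative stick input looks like in practice (function name and dead-zone value are my own guesses, not Moss's actual code):

```python
import math

def stage_relative_move(stick_x: float, stick_y: float, dead_zone: float = 0.1):
    """Map 2D analogue-stick input to a direction on the stage's ground
    (XZ) plane, ignoring where the player's head is pointing. Pushing
    'up' always moves in the stage's fixed +Z direction."""
    length = math.hypot(stick_x, stick_y)
    if length < dead_zone:
        return (0.0, 0.0, 0.0)
    return (stick_x / length, 0.0, stick_y / length)

print(stage_relative_move(0.0, 1.0))  # (0.0, 0.0, 1.0)
```

Because the mapping never consults the headset's orientation, "up" on the stick stays "away across the diorama" no matter where you crane your neck, which is both why it's consistent and why narrow ledges at odd angles get fiddly.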
The combat system is straightforward but solid enough. Quill has a three button string, and it can be cancelled into a dash using the jump button, and directed with the analogue stick. Enemies telegraph their attacks pretty clearly, so it's rarely difficult, but there's enough there to be engaging.
The game is built in Unreal, unlike most Quest games (almost all are made in Unity). It actually doesn't feel so very different though - likely because the lighting calculations that are cheap enough to run in Unity are the same ones that are cheap enough to run in Unreal. It benefits a lot from baked lighting. Some things are obvious jank - anything behind where the player is assumed to be sitting tends not to be modelled or textured - but the environments are in general very lively and I really like some of the interactions: you can slash through the grass and floating platforms rock as you jump onto them.
The story is sadly pretty standard high fantasy royalist chosen one stuff, nothing exciting really going on there. Though there are some very cute elements - the elf queen has a large frog which gives you challenges to unlock certain powers, and you can pet the frog, and even give it a high five. Basically all the small scale stuff is done really well, I just wish they'd put some more thought into what it's about. The Redwall/Mouse Guard style has a ton of potential - what sort of society would these sapient forest animals have? They just wanted a fairytale vibe though evidently.
Cutscene delivery is a weak point. You pull back into a cathedral-like space where you're paging through a large book, which is kinda cool, and listening to narration while looking at illustrations. In general I think these cutscenes would have worked better if you just stayed in the diorama world and watched the characters have animated interactions. Maybe it's a cost-saving measure. I guess having you turn the pages of the book is also a way to give you something to do, since sitting around watching NPCs talk is notoriously not fun in VR.
There are some very nice touches in the environment design though! In one area you walk across a bunch of human sized suits of armour and swords that are now rusting - nobody comments, but it definitely suggests that humans did exist in this world at some point. The actual puzzle levels tend to make less sense, they're very clearly designed as puzzles first and 'spaces people would live in' not at all, but they do tend to look pretty, and there's a clear sense of progression through different architectural areas - so far fairly standard forest, swamp, stone ruins etc. but I'll be curious to see if it goes anywhere weird with it later.
Weak story aside, I'm really impressed with Moss. Glad to see someone else giving third person VR a real shot. I'm looking forward to playing the rest of it.
...that's kinda all I played in a while huh. For example, I still haven't given Asgard's Wrath II, the swordfighting game produced internally at Meta that you get free on the Quest 3, a shot. Or Boneworks. I still haven't finished Half Life Alyx, even! Partly that's because the Quest 3 did not get on well with my long USB A to C cable - for some reason it only seems to work properly on a high quality C to C cable - and that restricts me from playing PCVR games that require too much movement. Still though...
Anyway, the game I've been working on these past 10 months should be ready to announce pretty soon. So I'm very excited for that.
govindhtech · 6 months ago
Open Platform For Enterprise AI Avatar Chatbot Creation
How can an AI avatar chatbot be created using the Open Platform For Enterprise AI framework?
I. Flow Diagram
The graph shows the application's overall flow; the code sample is the "Avatar Chatbot" example from the Open Platform For Enterprise AI GenAIExamples repository. The flowchart highlights the "AvatarChatbot" megaservice, the application's central component. The megaservice coordinates four distinct microservices, Automatic Speech Recognition (ASR), Large Language Model (LLM), Text-to-Speech (TTS), and Animation, linking them into a Directed Acyclic Graph (DAG).
Each microservice handles a specific avatar chatbot function:
Automatic Speech Recognition (ASR) transcribes the user's spoken words into text.
The Large Language Model (LLM) interprets the transcribed text from ASR, comprehends the user's query, and generates the appropriate text response.
A Text-to-Speech (TTS) service converts the LLM's text response into audible speech.
The Animation service combines the TTS audio with the user-supplied AI avatar picture or video, ensuring the avatar's lip movements stay synchronized with the speech. The result is a video of the avatar conversing with the user.
The user supplies an audio question and a visual input (an image or video); the output is a face-animated avatar video. Users receive near-real-time feedback from the avatar chatbot, hearing the audible response while watching the avatar speak naturally.
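As a toy illustration of that flow, the four stages chain linearly, each feeding the next. These are stubbed placeholder functions standing in for the real networked microservices, names assumed for illustration:

```python
# Toy stand-in for the ASR -> LLM -> TTS -> Animation flow described
# above. The real services are networked microservices; these are
# placeholder functions only.

def asr(audio: bytes) -> str:
    return "what is the weather today"          # transcript (stubbed)

def llm(prompt: str) -> str:
    return f"Answering: {prompt}"               # text response (stubbed)

def tts(text: str) -> bytes:
    return text.encode("utf-8")                 # fake audio payload

def animation(audio: bytes, avatar: str) -> str:
    return f"/outputs/{avatar}_reply.mp4"       # path to rendered video

def avatar_chatbot(audio_in: bytes, avatar: str) -> str:
    """Run the four stages as a linear DAG, each feeding the next."""
    return animation(tts(llm(asr(audio_in))), avatar)

print(avatar_chatbot(b"...", "mila"))  # /outputs/mila_reply.mp4
```

End-to-end latency is the sum of the four stages, which is why each one needs to be fast for the avatar's reply to feel near-real-time.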
Create the “Animation” microservice in the GenAIComps repository
To add a new microservice such as "Animation", we register it under comps/animation:
Register the microservice
@register_microservice(
    name="opea_service@animation",
    service_type=ServiceType.ANIMATION,
    endpoint="/v1/animation",
    host="0.0.0.0",
    port=9066,
    input_datatype=Base64ByteStrDoc,
    output_datatype=VideoPath,
)
@register_statistics(names=["opea_service@animation"])
After registration, we specify the callback function that runs when this microservice is invoked. For "Animation" this is the "animate" function, which accepts a "Base64ByteStrDoc" object as input audio and returns a "VideoPath" object containing the path to the generated avatar video. It sends an API request to the "wav2lip" FastAPI endpoint from "animation.py" and retrieves the response in JSON format.
Remember to add the "Base64ByteStrDoc" and "VideoPath" classes in comps/cores/proto/docarray.py and import them in comps/__init__.py!
This link contains the code for the "wav2lip" server API. Its FastAPI post function takes the incoming Base64 audio string and the user-specified avatar picture or video, produces an animated video, and returns its path.
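A simplified sketch of the data handling involved; the helper name and output path here are assumptions for illustration, not the actual GenAIComps code:

```python
import base64

# Hypothetical names and output path, for illustration only; the real
# endpoint lives in the GenAIComps "wav2lip" FastAPI server.

def animate(audio_b64: str, avatar_source: str) -> str:
    """Decode the Base64 audio payload and return the path where the
    lip-synced avatar video would be written."""
    audio_bytes = base64.b64decode(audio_b64)
    if not audio_bytes:
        raise ValueError("empty audio payload")
    # ...the real service would now POST audio + avatar to wav2lip
    # and read the resulting video path out of the JSON response...
    return f"/outputs/{avatar_source}_animated.mp4"

payload = base64.b64encode(b"RIFF....WAVEfmt ").decode("ascii")
sample_path = animate(payload, "avatar1")
print(sample_path)  # /outputs/avatar1_animated.mp4
```

Base64 keeps the audio bytes safe inside the JSON request body, at the cost of roughly a third more payload size.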
The steps above create the functional block for the microservice. To let users launch the "Animation" microservice and build the required dependencies, we also need a Dockerfile for the "wav2lip" server API and another for "Animation". For instance, Dockerfile.intel_hpu starts from the PyTorch* installer Docker image for Intel Gaudi and ends by executing a bash script called "entrypoint".
Create the “AvatarChatbot” Megaservice in GenAIExamples
The megaservice class AvatarChatbotService is defined in the Python file "AvatarChatbot/docker/avatarchatbot.py". In the "add_remote_service" function, the "asr", "llm", "tts", and "animation" microservices are added as nodes of a Directed Acyclic Graph (DAG) using the megaservice orchestrator's "add" function, and the edges are joined with the flow_to function.
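The add/flow_to pattern can be illustrated generically like this. This is not the real megaservice orchestrator API, just a minimal sketch of the idea of registering nodes and wiring edges before running the chain:

```python
# Generic illustration of the add()/flow_to() orchestration pattern.
# Not the actual GenAIComps API; services here are plain callables.

class Orchestrator:
    def __init__(self):
        self.nodes = {}      # name -> callable
        self.edges = {}      # name -> downstream name

    def add(self, name, fn):
        self.nodes[name] = fn
        return self

    def flow_to(self, src, dst):
        self.edges[src] = dst
        return self

    def run(self, start, payload):
        """Walk the chain from `start`, feeding each output forward."""
        name = start
        while name is not None:
            payload = self.nodes[name](payload)
            name = self.edges.get(name)
        return payload

dag = Orchestrator()
dag.add("asr", lambda audio: "hi")
dag.add("llm", lambda text: text.upper())
dag.add("tts", lambda text: text.encode())
dag.flow_to("asr", "llm").flow_to("llm", "tts")
print(dag.run("asr", b"audio"))  # b'HI'
```

Keeping the wiring separate from the services is what lets the same microservices be recombined into different megaservice examples.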
Specify megaservice’s gateway
A gateway is the interface through which users access the megaservice. The AvatarChatbotGateway class is defined in the Python file GenAIComps/comps/cores/mega/gateway.py. It holds the host, port, endpoint, input and output datatypes, and the megaservice orchestrator. It also provides a handle_request function that sends the initial input and parameters to the first microservice and collects the response from the last one.
Finally, we create a Dockerfile so users can quickly build the AvatarChatbot backend Docker image and launch the "AvatarChatbot" example. The Dockerfile includes scripts to install the required GenAI dependencies and components.
II. Face Animation Models and Lip Synchronization
GFPGAN + Wav2Lip
Wav2Lip is a state-of-the-art lip-synchronization method that uses deep learning to precisely match audio and video. It consists of:
A pre-trained expert lip-sync discriminator that accurately detects sync in real videos
A modified LipGAN model that produces a frame-by-frame talking-face video
In the pretraining phase, the expert lip-sync discriminator is trained on the LRS2 dataset to estimate the probability that an input video-audio pair is in sync.
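The expert works in the style of SyncNet: embed a short window of video frames and the corresponding audio, then score the pair by how similar the two embeddings are. Schematically, with the embedding networks omitted and the vectors below as stand-ins:

```python
import math

# SyncNet-style sync scoring sketch: turn the cosine similarity of a
# video embedding and an audio embedding into a sync probability.
# Real embeddings come from convolutional encoders; these are stand-ins.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def sync_probability(video_emb, audio_emb):
    # Map similarity in [-1, 1] to a probability in [0, 1].
    return (cosine_similarity(video_emb, audio_emb) + 1.0) / 2.0

print(sync_probability((1.0, 0.0), (1.0, 0.0)))   # 1.0 (in sync)
print(sync_probability((1.0, 0.0), (-1.0, 0.0)))  # 0.0 (out of sync)
```

During pretraining, this probability is pushed toward 1 for genuine in-sync pairs and toward 0 for deliberately offset ones.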
Wav2Lip training uses a LipGAN-like architecture. The generator contains a speech encoder, a visual encoder, and a face decoder, all built from stacks of convolutional layers; the discriminator is also made of convolutional blocks. The modified LipGAN is trained like other GANs: the discriminator learns to distinguish frames produced by the generator from ground-truth frames, while the generator learns to minimize the adversarial loss based on the discriminator's score. In total, the generator is trained to minimize a weighted sum of the following loss components:
An L1 reconstruction loss between the ground-truth and generated frames
A sync loss from the expert lip-sync discriminator, penalizing mismatch between the input audio and the output video frames
An adversarial loss between the generated and ground-truth frames, based on the discriminator score
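Putting those together, the generator objective is a weighted sum along these lines. The weights below are illustrative placeholders, not the exact values used in the Wav2Lip paper:

```python
# Illustrative weighted sum of the three generator loss terms listed
# above. Weights are placeholders, not the paper's exact values.

def generator_loss(l1_recon: float, sync_loss: float, adv_loss: float,
                   w_recon: float = 0.9, w_sync: float = 0.03,
                   w_adv: float = 0.07) -> float:
    return w_recon * l1_recon + w_sync * sync_loss + w_adv * adv_loss

print(generator_loss(0.2, 0.5, 0.8))
```

The reconstruction term dominates so the output stays faithful to the input face, while the smaller sync and adversarial terms steer the mouth region toward the audio and toward realism.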
At inference time, we feed the audio speech from the preceding TTS block and the video frames of the avatar figure to the trained Wav2Lip model, which produces a lip-synced video of the avatar speaking.
The Wav2Lip-generated video is lip-synced, but the resolution around the mouth region is reduced. To enhance face quality in the generated frames, a GFPGAN model can optionally be added after Wav2Lip. GFPGAN performs face restoration, predicting a high-quality image from an input facial image with unknown degradation; it uses a pretrained face GAN (such as StyleGAN2) as a prior in a U-Net degradation-removal module. Because GFPGAN is pretrained to recover high-quality facial detail in its output frames, the result is a more vibrant and lifelike avatar.
SadTalker
SadTalker offers another cutting-edge model option for facial animation alongside Wav2Lip. It is a stylized audio-driven talking-head video generator that predicts the 3D motion coefficients (head pose and expression) of a 3D Morphable Model (3DMM) from audio. These coefficients are mapped to 3D key points and passed, together with the input image, through a 3D-aware face renderer, producing a lifelike talking-head video.
Intel enabled the Wav2Lip model on Intel Gaudi AI accelerators, and both the SadTalker and Wav2Lip models on Intel Xeon Scalable processors.
Read more on Govindhtech.com
dreamdolldeveloper · 1 year ago
dreamdoll watchlist ★
key takeaways:
python is a good language for beginners to start with
start with: variables, datatypes, loops, functions, if statements, oop
if you cover the basics, it should take you approx. 2 weeks.
first project: do something interesting/useful. start small.
simple games or a food recommendation system with specific ingredients
panda dataframe
use API = application programming interface = different pieces of software interacting with each other. grabbing data from another source
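a tiny sketch of the food recommendation idea (made-up recipe data, rank recipes by ingredient overlap):

```python
# Minimal version of the "food recommendation with specific
# ingredients" starter project: rank recipes by how many of
# your ingredients they use. Recipe data is made up.

RECIPES = {
    "omelette": {"egg", "butter", "cheese"},
    "fried rice": {"rice", "egg", "soy sauce"},
    "salad": {"lettuce", "tomato", "olive oil"},
}

def recommend(ingredients):
    have = set(ingredients)
    scored = [(len(have & needs), name) for name, needs in RECIPES.items()]
    scored.sort(reverse=True)
    return [name for score, name in scored if score > 0]

print(recommend(["egg", "rice"]))  # ['fried rice', 'omelette']
```

small, useful, and it already touches variables, datatypes, loops, functions and if statements.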
after your first project, learn about data structures and algorithms. how APIs work. learn how to read documentation.
dictionary
linked lists
queues
heaps
trees
graphs
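two of the above in one tiny sketch: a graph as an adjacency dict, walked breadth-first with a queue:

```python
from collections import deque

# A graph stored as an adjacency dict, traversed breadth-first
# using a queue -- two structures from the list above in one go.

def bfs(graph, start):
    visited, order = {start}, []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)
    return order

g = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(bfs(g, "a"))  # ['a', 'b', 'c', 'd']
```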
learn about more things and how to implement them into projects.
correct mindset:
implementation and application > theory and concepts
knowing ≠ being able to do it
stay curious.
explore things outside of what is prescribed in a resource. that's how you learn about different concepts and how you deeply understand the concepts that you already know.
the best programmers they've met are the tinkerers. these are the people who play around with their code and try a bunch of different things.
getting stuck:
it all comes down to problem solving. be comfortable with not knowing things and stay calm while trying to figure out the problems
how to learn even faster:
find a community where you work on projects together. you will learn so many things from experienced programmers just by interacting with them. plus accountability, because you just can't give up
learning is never ending. you will always be learning something new.
cyber-sec · 3 months ago
𝗪𝗲𝗲𝗸𝗹𝘆 𝗠𝗮𝗹𝘄𝗮𝗿𝗲 & 𝗧𝗵𝗿𝗲𝗮𝘁𝘀 𝗥𝗼𝘂𝗻𝗱𝘂𝗽 | 𝟭𝟬 𝗙𝗲𝗯 - 𝟭𝟲 𝗙𝗲𝗯 𝟮𝟬𝟮𝟱
1️⃣ 𝗙𝗜𝗡𝗔𝗟𝗗𝗥𝗔𝗙𝗧 𝗠𝗮𝗹𝘄𝗮𝗿𝗲 𝗘𝘅𝗽𝗹𝗼𝗶𝘁𝘀 𝗠𝗶𝗰𝗿𝗼𝘀𝗼𝗳𝘁 𝗚𝗿𝗮𝗽𝗵 𝗔𝗣𝗜 FINALDRAFT is targeting Windows and Linux systems, leveraging Microsoft Graph API for espionage. Source: https://www.elastic.co/security-labs/fragile-web-ref7707
2️⃣ 𝗦𝗸𝘆 𝗘𝗖𝗖 𝗗𝗶𝘀𝘁𝗿𝗶𝗯𝘂𝘁𝗼𝗿𝘀 𝗔𝗿𝗿𝗲𝘀𝘁𝗲𝗱 𝗶𝗻 𝗦𝗽𝗮𝗶𝗻 𝗮𝗻𝗱 𝗧𝗵𝗲 𝗡𝗲𝘁𝗵𝗲𝗿𝗹𝗮𝗻𝗱𝘀 Four distributors of the criminal-encrypted service Sky ECC were arrested in Spain and the Netherlands. Source: https://www.bleepingcomputer.com/news/legal/sky-ecc-encrypted-service-distributors-arrested-in-spain-netherlands/
3️⃣ 𝗔𝘀𝘁𝗮𝗿𝗼𝘁𝗵: 𝗡𝗲𝘄 𝟮𝗙𝗔 𝗣𝗵𝗶𝘀𝗵𝗶𝗻𝗴 𝗞𝗶𝘁 𝗧𝗮𝗿𝗴𝗲𝘁𝘀 𝗠𝗮𝗷𝗼𝗿 𝗘𝗺𝗮𝗶𝗹 𝗣𝗿𝗼𝘃𝗶𝗱𝗲𝗿𝘀 The Astaroth phishing kit is used to bypass 2FA and steal credentials from Gmail, Yahoo, AOL, O365, and third-party logins. Source: https://slashnext.com/blog/astaroth-a-new-2fa-phishing-kit-targeting-gmail-yahoo-aol-o365-and-3rd-party-logins/
4️⃣ 𝗥𝗮𝗻𝘀𝗼𝗺𝗛𝘂𝗯 𝗕𝗲𝗰𝗼𝗺𝗲𝘀 𝟮𝟬𝟮𝟰’𝘀 𝗧𝗼𝗽 𝗥𝗮𝗻𝘀𝗼𝗺𝘄𝗮𝗿𝗲 𝗚𝗿𝗼𝘂𝗽 RansomHub overtook competitors in 2024, hitting over 600 organisations worldwide. Source: https://www.group-ib.com/blog/ransomhub-never-sleeps-episode-1/
5️⃣ 𝗕𝗮𝗱𝗣𝗶𝗹𝗼𝘁 𝗖𝗮𝗺𝗽𝗮𝗶𝗴𝗻: 𝗦𝗲𝗮𝘀𝗵𝗲𝗹𝗹 𝗕𝗹𝗶𝘇𝘇𝗮𝗿𝗱 𝗧𝗮𝗿𝗴𝗲𝘁𝘀 𝗚𝗹𝗼𝗯𝗮𝗹 𝗡𝗲𝘁𝘄𝗼𝗿𝗸𝘀 The Seashell Blizzard subgroup runs a multiyear global operation for continuous access and data theft. Source: https://www.microsoft.com/en-us/security/blog/2025/02/12/the-badpilot-campaign-seashell-blizzard-subgroup-conducts-multiyear-global-access-operation/
Additional Cybersecurity News:
🟢 𝗔𝗽𝗽𝗹𝗲 𝗙𝗶𝘅𝗲𝘀 𝗔𝗰𝘁𝗶𝘃𝗲𝗹𝘆 𝗘𝘅𝗽𝗹𝗼𝗶𝘁𝗲𝗱 𝗭𝗲𝗿𝗼-𝗗𝗮𝘆 Apple patches a critical zero-day vulnerability affecting iOS devices. Source: https://www.techspot.com/news/106731-apple-fixes-another-actively-exploited-zero-day-vulnerability.html
🟠 𝗝𝗮𝗽𝗮𝗻 𝗜𝗻𝘁𝗿𝗼𝗱𝘂𝗰𝗲𝘀 "𝗔𝗰𝘁𝗶𝘃𝗲 𝗖𝘆𝗯𝗲𝗿 𝗗𝗲𝗳𝗲𝗻𝗰𝗲" 𝗕𝗶𝗹𝗹 Japan is moving towards offensive cybersecurity tactics with a new legislative push. Source: https://www.darkreading.com/cybersecurity-operations/japan-offense-new-cyber-defense-bill
🔴 𝗖𝗿𝗶𝘁𝗶𝗰𝗮𝗹 𝗡𝗩𝗜𝗗𝗜𝗔 𝗔𝗜 𝗩𝘂𝗹𝗻𝗲𝗿𝗮𝗯𝗶𝗹𝗶𝘁𝘆 𝗗𝗶𝘀𝗰𝗼𝘃𝗲𝗿𝗲𝗱 A severe flaw in NVIDIA AI software has been discovered, enabling container escapes. Source: https://www.wiz.io/blog/nvidia-ai-vulnerability-deep-dive-cve-2024-0132