# Ubuntu 18
mothcpu · 8 months ago
Text
let's all quit the internet! (he/him)
2K notes · View notes
netscapenavigator-official · 9 months ago
Text
Apparently the Zorin OS dev team has decided to forgo their former path of following the Ubuntu LTS kernels. Zorin 17 was based on 22.04 LTS and should’ve remained on that kernel (like all its predecessors did) until 18 came out, which would’ve switched to 24.04 LTS.
But they released 17.1 which was based on 23.10.
And today they just released 17.2 which is based on 24.04 LTS.
I don’t know why they’re doing this, but I’m not one to complain about a more up-to-date kernel, so yippee.
4 notes · View notes
master-wwweb · 1 year ago
Text
Computer Science Guide – Grade 10
Hi everyone, here is the post with the class information.
Date: March 18, 2024
Topic:
Integrator Challenge: 1.        Present an interactive, themed web page with multimedia content (following the instructions previously given in class)
2.           Create a campaign on “ICT Tools and Their Importance”: research the topic and build the activity on your page
Due: April 10, 2024 (if you have any questions, ask your teacher in time, not on the day the work is due)
Challenge: Choose one of the following platforms to build your website:
•    WIX
•    BLOGSPOT OR BLOGGER
•    TUMBLR
•    WEBNODE
•    WEEBLY
•    WORDPRESS
•    MEDIUM
•    OTHERS
If you already have one, add themed content to it (“themed” means strictly academic content focused on TECHNOLOGY AND COMPUTING).
Week by week, your blog should show your progress reports on the final deliverable.
- Videos of the making of your final deliverable (while you carry out the activity, you can record and/or take photos to document your process).
Note: let me know about any questions or concerns in time. I’ll be available.
*************
- Operating and application software
From a computing perspective, an application program is a kind of software designed to make it easier for the user to carry out a given task. This sets it apart from other kinds of programs, among which operating systems can be mentioned.
- Operating systems are what make the computer run. There are several, such as Microsoft Windows, Mac OS X, GNU/Linux, UNIX, Solaris, FreeBSD, OpenBSD (a free operating system), Google Chrome OS, Debian, Ubuntu, Mandriva, Sabayon, Fedora, Linpus Linux, and Haiku (BeOS).
- Programming languages (which provide the tools needed to develop computer programs in general) and utilities (meant for maintenance and general-purpose tasks). For example: Java, C, Python, C++, C#, Visual Basic, JavaScript, PHP, Swift, SQL.
Software is the intangible, logical element of a computer. That is, programs are presented as tools to improve your performance. Some examples of these programs or applications are word processors, such as Microsoft Word; spreadsheets, such as Excel; and databases, such as Microsoft Access.
Hardware, on the other hand, is the material, physical component. Operating systems are said to be the link that joins software to hardware.
Sometimes application programs are custom-built, that is, designed around each user’s needs and expectations, so the software solves specific problems. In other cases they are integrated suites that solve general problems and bundle multiple applications; an office suite, for example, combines applications such as word processors and spreadsheets.
10. Tools for creating user-interface mockups for software applications
Balsamiq Mockups: a very fun and easy-to-use application.
Mockingbird
Mockup Builder
MockFlow
HotGloo
InVision
JustProto
Proto.io
Framer
Origami Studio
Independent challenge: Prepare a PowerPoint presentation or an explanatory video on operating and application software for handling records, texts, diagrams, figures, construction drawings, mockups, models, and prototypes with computing tools.
FINAL REPORT
The teacher explains to the students how to prepare their final report, with direct links to the activities requested during the term.
Note: the guide is also available for download.
9 notes · View notes
levahost · 1 year ago
Text
LEVAHOST AT&T USA Residential VPS and AT&T USA Residential Proxies Packages | Physical Dedicated Server + /24 Residential IP Rental packages...
Hello Reddit users,
The AT&T virtual servers of LEVAHOST Information Technologies have been activated. Even though we do not directly host physical servers in the AT&T data center, we provide you with that data center’s quality network through VPS servers allocated from our business partner within AT&T. Our AT&T Residential Proxy service has also been activated. We would like to tell you about the affordable prices and full feature sets of our AT&T Residential VPS and AT&T Residential Proxy packages. Our LEVAHOST AT&T Residential VPS and AT&T Residential Proxy packages are listed below; review them and order the package that suits you best on our site.
LEVAHOST AT&T Residential VPS Packages:
USA LOCATION AT&T RESIDENTIAL VPS + 4 Real AT&T Residential IPs, from $77/month:
- 8 vCPU, 6 GB RAM, 60 GB SSD
- 4 real, unshared residential IPs (not proxies)
- Unlimited traffic, 100 Mbps line, 1 Gbps port speed
- Earnapp/Honeygain supported, 24/7 support
- USA location, city: Ashburn, Virginia or Chicago
- Optional AT&T residential network (free); optional Windstream residential network (free)
- Selectable operating systems (Windows, Ubuntu, Debian, CentOS)
USA LOCATION AT&T RESIDENTIAL VPS + 8 Real AT&T Residential IPs, from $90/month:
- 10 vCPU, 8 GB RAM, 80 GB SSD
- 8 real, unshared residential IPs (not proxies)
- Unlimited traffic, 100 Mbps line, 1 Gbps port speed
- Earnapp/Honeygain supported, 24/7 support
- USA location, city: Ashburn, Virginia or Chicago
- Optional AT&T residential network (free); optional Windstream residential network (free)
- Selectable operating systems (Windows, Ubuntu, Debian, CentOS)
BUY NOW: https://www.levahost.com/usa-location-residential-vps-residential-rdp/
LEVAHOST AT&T or Windstream Residential Proxy Packages:
USA LOCATION (AT&T) 3 x RESIDENTIAL PROXY, from $18/month (the total price is for 3 pieces; you need to buy 3):
- 1 IPv4 proxy, SOCKS v4/v5 connection
- Optionally USA location (AT&T operator) or USA location (Windstream operator)
- Non-shared IPv4 address
- Upload/download speed between 150 and 300 Mbps
- Unlimited traffic, high line capacity
- 30-day usage right
- Multiple pickup available
BUY NOW: https://www.levahost.com/usa-location-residential-proxy-static-ip-v4-residential-proxies-levahost-information-technology/
Best Regards.
3 notes · View notes
kelseychiang · 1 year ago
Text
WEEK 4 ESSAY - A FACET OF MY IDENTITY/ BACKGROUND
Being from South Africa has profoundly shaped my identity and background, influencing my perspectives, values, and sense of belonging. Growing up in the rainbow nation, I have been immersed in a diverse tapestry of cultures, languages, and traditions that have enriched my understanding of humanity and shaped my worldview. Over the 18 years I grew up there, I learnt many different perspectives.
One of the most significant aspects of my South African identity is the rich cultural diversity that fills every aspect of life. South Africa is a melting pot of cultures, with eleven official languages and a mixture of ethnicities, religions, and traditions. Growing up in this mosaic has exposed me to many perspectives and experiences, fostering a deep appreciation for multiculturalism and inclusivity. My South African background has instilled in me a profound respect for diversity and a curiosity about different cultures.
Furthermore, my South African identity is intertwined with a complex socio-political history that continues to shape the country's future. Coming of age in post-apartheid South Africa, I have witnessed the ongoing struggles and triumphs of our nation-building and reconciliation. The legacy of apartheid, with its deep-seated inequalities and injustices, serves as a constant reminder of the importance of social justice and human rights. My experiences of grappling with issues of race, identity, and privilege have educated me on working towards a more inclusive society.
Having said that, since moving to California I have come to appreciate the wisdom South Africa has given me. Comparing myself to the people I have met here, I can see the different perspectives and a general ignorance (meant in no bad way) of what struggle actually is. I do believe everywhere in the world has its own struggles, but when comparing (I know I should not compare) I am grateful to be knowledgeable about South Africa and responsible in representing it and its characteristics.
In conclusion, being from South Africa has profoundly shaped my identity and background, leaving me with a deep appreciation for cultural diversity, exposure to a reconciling and building nation, and perspectives and wisdom you cannot get anywhere else. Embracing my South African upbringing has enriched my life in countless ways, shaping the person I am today and guiding my aspirations for the future. As I navigate the complexities of my new and modern world in California, I carry with me the lessons and values instilled by my South African upbringing, embracing diversity, resilience, and ‘ubuntu’ which is the spirit of humanity. (00:55am)
2 notes · View notes
gattsuru · 10 months ago
Text
Weird thing is it seems bizarrely incompetent.
The owners of the .ai TLD (Anguilla) are notorious for pulling domains back (without refund) for wide varieties of content
The owner(s) of the domain name bought it through NameCheap, or through a reseller buying through NameCheap (which is... better for small personal sites, if that).
But they don't have much in DNS: no CNAME, no MX records (meaning the domain can't receive e-mail), no DKIM or SPF (meaning mail it sends won't be accepted almost anywhere).
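For reference, the checks involved look something like this; example.com is a placeholder for the real domain, and the DKIM selector is site-specific:
dig +short example.com MX     # empty output = the domain can't receive mail
dig +short example.com TXT    # an SPF policy would show up here as a "v=spf1 ..." record
dig +short selector._domainkey.example.com TXT    # DKIM key, if one exists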
The weird location problems are cloudflare being cloudflare; you get the 'best' Cloudflare cache, rather than the originating server. Normally the solution is to check DNS histories, but they've been using Cloudflare since at least May.
It's running NGINX (yay!), but an ancient version (1.18, which was outdated in late 2020) on Ubuntu (fuckers). Probably just a side effect of running Ubuntu LTS on a server, but this is why you don't do that on a serious machine.
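You can see this from the outside without touching the box; a minimal probe, assuming the server header isn't suppressed:
curl -sI https://example.com/ | grep -i '^server:'    # nginx on Ubuntu typically reports e.g. "Server: nginx/1.18.0 (Ubuntu)"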
The password prompt being funky is just because they used the wrong 'type' attribute on the HTML input element. That only affects rendering on the client, so don't use it to judge whether a site is doing something naughty with your info. (And don't trust this site with any password you use elsewhere.)
Browsers not treating it as a normal sign in could be downstream of that, or because of the way it handles two different pages for e-mail and password. There's ways to fix the latter, but I'd have no idea where to start with this jank.
They have SSL set up (probably to get a boost in search), but it's using Google Trust Services as the CA, which you shouldn't use:
[screenshot]
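If you want to verify the certificate issuer yourself, a quick sketch with openssl (the domain is a placeholder):
echo | openssl s_client -connect example.com:443 2>/dev/null | openssl x509 -noout -issuer -dates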
The code looks like what I'd expect a self-taught first-year web-dev student to put together, and not even in a 'they used an LLM's output directly' sorta way. It's not generic, it's just weird. Inline script tags near the end of the HTML document, but still inside the body tags? What?
The images are in JPG format (which does save bandwidth, but has a pretty harsh impact on quality for these types of image). Maybe done to strip the model information that AIGen tools put into the PNG chunk info, but you can do that with an imagemagick script.
Because of that, I can't say the model for sure, but maybe a finetune of Stable Diffusion 1.x? That'd be an iffy choice in May: by now Flux.1 absolutely shreds these things, in particular often eliminating many of the 'AI telltales'.
It's got no online presence beyond bots. Even the AIgen discords I'm in don't seem aware of it; it has no social media presence under its real name, and so on.
The theft methodology is bizarre. Using img2img as a way to avoid getting caught by artists, for something even aiGen proponents see as art theft, has long been a concern and possibility, but like... this is pretty clearly lifted from this tumblr user -- there's absolutely no way those tag combinations happened by accident. But process-wise, did they try to throw one image into a CLIP interrogator, run a different image and the resulting prompt through img2img at a low denoising ratio, and then just slap it up directly? That's slower, more readily detected, and gonna look worse than just running from a raw prompt, or the CLIP prompt itself! And here it's obviously worse than the source images or a new prompt.
((The resolutions for images are also odd, especially if they had to use image upscaling to make it work. That clone above is 1200x1200, for example, where the original image is 1000x1000. Low denoising ratios can sometimes let you get away with weird stuff, but that looks more like someone who either doesn't know what they're doing or set up a script with some weird assumptions.
Giving the site a normal prompt 'required' a 'sign-in' for being 18+ (hah), but produced an 896x1152 image, which is... not what any direct model usage or upscaling produces by default. The prompted image did not show up on the front page.
On the other hand, the 'login' is preserved... kinda, as a single cookie with a 'k' and 'uuid' value, with an expiry date of one year. Weird way of doing that, and not even in a security-problem sense. Passwords are checked server-side, even though it doesn't give an error if you put in a wrong password -- which means they're being stored on the server, and I absooooolutely would not trust the dev to have implemented proper salting and hashing of those passwords.
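For contrast, salted hashing isn't exotic; even a recent OpenSSL can demonstrate it from the shell (the password here is obviously a stand-in, and a real app should use bcrypt/argon2 via a library):
openssl passwd -6 'hunter2'    # prints a $6$<salt>$<hash> SHA-512 crypt string, with a fresh random salt each run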
They're probably using a style LoRA and/or preprompt on text prompts. An intentionally bad prompt didn't get a garbage output of the sort I'd expect from SD1.x or SDXL. Which is at least something, but means they're gonna hit token limits pretty early.
Normally, I'd say e-mail harvester scam, but the lack of a confirmation e-mail (or any way to send one) means that whoever's running it is going to collect a lot of e-mail addresses that don't exist. Scamming people via password reuse is a little more plausible and would explain the lack of e-mail confirmation (low friction!), especially given the lack of other sign-in options, but the lack of any other social media presence is weird.
Might just be some tech novice who thinks he or she has an idea they could eventually monetize, and no ethics on the way to get there -- it does notably say "beta" in a few spots. But then the social media spam and overt tag and image theft make even less sense: even if they could come up with a model to sell imagegens (or ads to sell...), users wouldn't get anything like the outputs they're promoting.
It's Time To Investigate SevenArt.ai
sevenart.ai is a website that uses ai to generate images.
Except, that's not all it can do.
It can also overlay ai filters onto images to create the illusion that the algorithm created these images.
And its primary image source is Tumblr.
It scrapes the site for recent images that are at least 10 days old and have some notes attached, copying the tags to make the unsuspecting user think the post came from a genuine user.
No image is safe. Art, photography, screenshots, you name it.
Initially I thought these were bots that just repost images from their site, as well as bastardizations of pictures from across tumblr, until a user by the name of @nataliedecorsair discovered that these "bots" can also block users and restrict replies.
Not only that, but these bots do not procreate and multiply like most bots do. Or at least, they haven't yet.
The following are the list of bots that have been found on this very site. Brace yourself. It's gonna be a long one:
@giannaaziz1998blog
@kennedyvietor1978blog
@nikb0mh6bl
@z4uu8shm37
@xguniedhmn
@katherinrubino1958blog
@3neonnightlifenostalgiablog
@cyberneticcreations58blog
@neomasteinbrink1971blog
@etharetherford1958blog
@punxajfqz1
@camicranfill1967blog
@1stellarluminousechoblog
@whwsd1wrof
@bnlvi0rsmj
@steampunkstarshipsafari90blog
@surrealistictechtales17blog
@2steampunksavvysiren37blog
@krispycrowntree
@voucwjryey
@luciaaleem1961blog
@qcmpdwv9ts
@2mplexltw6
@sz1uwxthzi
@laurenesmock1972blog
@rosalinetritsch1992blog
@chereesteinkirchner1950blog
@malindamadaras1996blog
@1cyberneticdreamscapehubblog
@neonfuturecityblog
@olindagunner1986blog
@neonnomadnirvanablog
@digitalcyborgquestblog
@freespiritfusionblog
@piacarriveau1990blog
@3technoartisticvisionsblog
@wanderlustwineblissblog
@oyqjfwb9nz
@maryannamarkus1983blog
@lashelldowhower2000blog
@ovibigrqrw
@ywldujyr6b
@yudacquel1961blog
@neotechcreationsblog
@wildernesswonderquest87blog
@cybertroncosmicflow93blog
@emeldaplessner1996blog
@neuralnetworkgallery78blog
@dunstanrohrich1957blog
@juanitazunino1965blog
@natoshaereaux1970blog
@aienhancedaestheticsblog
@techtrendytreks48blog
@cgvlrktikf
@digitaldimensiondioramablog
@pixelpaintedpanorama91blog
@futuristiccowboyshark
@digitaldreamscapevisionsblog
@janishoppin1950blog
The oldest ones were created in March and started scraping in June/July; the later additions to the family were created in July.
So, I have come to the conclusion that these accounts might be run by a combination of bot and human. Cyborg, if you will.
But it still doesn't answer my main question:
Who is running the whole operation?
The site itself gave us zero answers to work with.
[screenshot]
No copyright notice, no link to the engine the site runs on, except for the sign-in thingy (which I did try).
[screenshot]
I gave the site a fake email and a shitty password.
[screenshots]
Turns out it doesn't function like most sites that ask for an email and password.
It didn't check the burner email, the password isn't dotted out and is visible for the whole world to see, and, and this is the important thing...
My browser didn't detect that this was an email and password thingy.
[screenshot]
And there was no log off feature.
This could mean two things.
Either we have a site that doesn't have a functioning email and password database, or we have a bunch of gullible people throwing their email and password in for people to potentially steal.
I can't confirm or deny these facts, because, again, the site has little to work with.
The code? Generic as all hell.
[screenshot]
Tried searching for more information about this site, like the server it's on, or who owns the site, or something. ANYTHING.
Multiple sites pulled me in different directions. One site said it originates in Iceland. Others say it's in California or Canada.
Luckily, the server it used was the same everywhere: it's powered by Cloudflare.
Unfortunately, I have no idea what to do with any of this information.
If you have any further information about this site, let me know.
Until there is a clear answer, we need to keep doing what we are doing.
Spread the word and report about these cretins.
If they want attention, then they are gonna get the worst attention.
12K notes · View notes
campertenis · 10 days ago
Text
Persistence in Ventoy
Ventoy persistence with Lubuntu 18 works very well: brave.appimage runs fine, and brave.tar.xz works even better, since with it I can install Chrome extensions.
The persistence files you create can be saved to restore your Linux configuration on any other computer.
1- Create the persistence file inside the Ventoy program directory on Linux:
sh CreatePersistentImg.sh -s 4096    # -s is the size in MB; this creates a 4 GB persistence.dat
2- Create a directory named ventoy on partition 1, and put this ventoy.json file in it:
{
  "persistence": [
    {
      "image": "/lubuntu-18.04.5-desktop-amd64.iso",
      "backend": "/persistence.dat"
    }
  ]
}
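The first partition of the USB stick should then end up looking roughly like this (the ISO filename must match the "image" path in the JSON exactly):
/ (partition 1)
├── lubuntu-18.04.5-desktop-amd64.iso
├── persistence.dat
└── ventoy/
    └── ventoy.json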
3- If we want to add another Linux and have two of them, one being Archcraft, unzip this onto the first partition: persistence_ext4_2GB_vtoycow.dat.7z, for option 2 with Archcraft:
{
  "persistence": [
    {
      "image": "/kali.iso",
      "backend": "/persistence.dat"
    },
    {
      "image": "/distro-linux-2.iso",
      "backend": "/persistencevtoycow.dat"
    }
  ]
}
4- Location of Lubuntu 18, the Brave browser, and the explanatory video:
5- Installing the NetSurf browser on Lubuntu 18:
cd Downloads
mkdir netsurf
cd netsurf
wget http://archive.ubuntu.com/ubuntu/pool/universe/n/netsurf/netsurf-common_3.6-3.2_all.deb
wget http://archive.ubuntu.com/ubuntu/pool/universe/n/netsurf/netsurf-gtk_3.6-3.2_amd64.deb
wget http://archive.ubuntu.com/ubuntu/pool/universe/n/netsurf/netsurf_3.6-3.2_all.deb
sudo apt install ./netsurf-common_3.6-3.2_all.deb
sudo apt install ./netsurf-gtk_3.6-3.2_amd64.deb
sudo apt install ./netsurf_3.6-3.2_all.deb
0 notes
renatoferreiradasilva · 27 days ago
Text
PROJECT
Step-by-Step Implementation of NeoSphere
1. Setting Up the Development Environment
Required Tools:
Python 3.10+ for the Web2 backend (FastAPI, Redis).
Node.js 18+ for Web3 services and the frontend.
Solidity for smart contracts.
Docker for containerizing services (Redis, MongoDB, RabbitMQ).
Truffle/Hardhat for smart contract development.
# Installing the basic dependencies (Linux/Ubuntu)
sudo apt-get update
sudo apt-get install -y python3.10 nodejs npm docker.io
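As a hedged sketch of the containerized services named in the tool list above (container names and image tags are illustrative, not part of the project):
docker run -d --name neosphere-redis -p 6379:6379 redis:7
docker run -d --name neosphere-mongo -p 27017:27017 mongo:7
docker run -d --name neosphere-rabbitmq -p 5672:5672 -p 15672:15672 rabbitmq:3-management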
2. Implementing the Web2 API with FastAPI
Project Structure:
/neosphere-api
├── app/
│   ├── __init__.py
│   ├── main.py        # API entry point
│   ├── models.py      # Pydantic models
│   └── database.py    # MongoDB connection
└── requirements.txt
Expanded Code (app/main.py):
from fastapi import FastAPI, Depends, HTTPException
from pymongo.errors import DuplicateKeyError
from app.models import PostCreate, PostResponse
from app.database import get_db
import uuid
import datetime

app = FastAPI(title="NeoSphere API", version="0.2.0")

@app.post("/posts/", response_model=PostResponse, status_code=201)
async def create_post(post: PostCreate, db=Depends(get_db)):
    post_id = str(uuid.uuid4())
    post_data = {
        "post_id": post_id,
        "user_id": post.user_id,
        "content": post.content,
        "media_urls": post.media_urls or [],
        "related_nft_id": post.related_nft_id,
        "created_at": datetime.datetime.utcnow(),
        "likes": 0,
        "comments_count": 0
    }
    try:
        db.posts.insert_one(post_data)
    except DuplicateKeyError:
        raise HTTPException(status_code=400, detail="Post ID already exists")
    return post_data

@app.get("/posts/{post_id}", response_model=PostResponse)
async def get_post(post_id: str, db=Depends(get_db)):
    post = db.posts.find_one({"post_id": post_id})
    if not post:
        raise HTTPException(status_code=404, detail="Post not found")
    return post
3. Redis Cache Layer for NFTs
Extended Implementation (services/nft_cache.py):
import json

import redis
from tenacity import retry, stop_after_attempt, wait_fixed

from config import settings

class NFTCache:
    def __init__(self):
        self.client = redis.Redis(
            host=settings.REDIS_HOST,
            port=settings.REDIS_PORT,
            decode_responses=True
        )

    @retry(stop=stop_after_attempt(3), wait=wait_fixed(0.5))
    async def get_metadata(self, contract_address: str, token_id: str) -> dict:
        cache_key = f"nft:{contract_address}:{token_id}"
        cached_data = self.client.get(cache_key)
        if cached_data:
            return json.loads(cached_data)
        # Fetch from the blockchain on a cache miss
        # (BlockchainService is assumed to be defined elsewhere in the project)
        metadata = await BlockchainService.fetch_metadata(contract_address, token_id)
        if metadata:
            self.client.setex(
                cache_key,
                settings.NFT_CACHE_TTL,
                json.dumps(metadata)
            )
        return metadata

    def invalidate_cache(self, contract_address: str, token_id: str):
        self.client.delete(f"nft:{contract_address}:{token_id}")
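To poke at the cache from a shell, assuming a local Redis and the key layout used above (the contract address and token id are placeholders):
redis-cli get "nft:0x1234abcd:42"    # cached JSON metadata, or (nil) on a miss
redis-cli ttl "nft:0x1234abcd:42"    # seconds left before NFT_CACHE_TTL evicts it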
4. Smart Contract for NFTs with Royalties (Full File)
Full Contract (contracts/NeoSphereNFT.sol):
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import "@openzeppelin/contracts/token/ERC721/ERC721.sol";
import "@openzeppelin/contracts/access/Ownable.sol";
import "@openzeppelin/contracts/interfaces/IERC2981.sol";

contract NeoSphereNFT is ERC721, Ownable, IERC2981 {
    // Plain counter: the Counters library was removed in the OpenZeppelin 5.x
    // line that the Ownable(msg.sender) constructor below implies
    uint256 private _nextTokenId;

    struct RoyaltyInfo {
        address recipient;
        uint96 percentage;
    }

    mapping(uint256 => RoyaltyInfo) private _royalties;
    mapping(uint256 => string) private _tokenURIs;

    event NFTMinted(
        uint256 indexed tokenId,
        address indexed owner,
        string tokenURI,
        address creator
    );

    constructor() ERC721("NeoSphereNFT", "NSPH") Ownable(msg.sender) {}

    function mint(
        address to,
        string memory uri,
        address royaltyRecipient,
        uint96 royaltyPercentage
    ) external onlyOwner returns (uint256) {
        require(royaltyPercentage <= 10000, "Royalties max 100%");
        uint256 tokenId = _nextTokenId++;
        _safeMint(to, tokenId);
        _setTokenURI(tokenId, uri);
        _setRoyaltyInfo(tokenId, royaltyRecipient, royaltyPercentage);
        emit NFTMinted(tokenId, to, uri, msg.sender);
        return tokenId;
    }

    function royaltyInfo(
        uint256 tokenId,
        uint256 salePrice
    ) external view override returns (address, uint256) {
        RoyaltyInfo memory info = _royalties[tokenId];
        return (info.recipient, (salePrice * info.percentage) / 10000);
    }

    // Both ERC721 and IERC2981 declare supportsInterface, so it must be overridden here
    function supportsInterface(
        bytes4 interfaceId
    ) public view override(ERC721, IERC165) returns (bool) {
        return interfaceId == type(IERC2981).interfaceId || super.supportsInterface(interfaceId);
    }

    function _setTokenURI(uint256 tokenId, string memory uri) internal {
        _tokenURIs[tokenId] = uri;
    }

    function _setRoyaltyInfo(
        uint256 tokenId,
        address recipient,
        uint96 percentage
    ) internal {
        _royalties[tokenId] = RoyaltyInfo(recipient, percentage);
    }
}
5. Payment System with a Unified Gateway
Full Implementation (payment/gateway.py):
from abc import ABC, abstractmethod
from typing import Dict
from pydantic import BaseModel

class PaymentRequest(BaseModel):
    amount: float
    currency: str
    method: str
    user_metadata: Dict
    payment_metadata: Dict

class PaymentProvider(ABC):
    @abstractmethod
    def process_payment(self, request: PaymentRequest) -> Dict:
        pass

class StripeACHProvider(PaymentProvider):
    def process_payment(self, request: PaymentRequest) -> Dict:
        # A real implementation would use the Stripe SDK here
        return {
            "status": "success",
            "transaction_id": "stripe_tx_123",
            "fee": request.amount * 0.02
        }

class NeoPaymentGateway:
    def __init__(self):
        self.providers = {
            "ach": StripeACHProvider(),
            # Add other providers here
        }

    def process_payment(self, request: PaymentRequest) -> Dict:
        provider = self.providers.get(request.method.lower())
        if not provider:
            raise ValueError("Unsupported payment method")
        # Additional validation
        if request.currency not in ["USD", "BRL"]:
            raise ValueError("Unsupported currency")
        return provider.process_payment(request)

# Example usage:
# gateway = NeoPaymentGateway()
# result = gateway.process_payment(PaymentRequest(
#     amount=100.00,
#     currency="USD",
#     method="ACH",
#     user_metadata={"country": "US"},
#     payment_metadata={"account_number": "..."}
# ))
6. Web3 Authentication with SIWE
Frontend Implementation (React):
import { useSigner } from 'wagmi'
import { SiweMessage } from 'siwe'

const AuthButton = () => {
  const { data: signer } = useSigner()

  const handleLogin = async () => {
    const message = new SiweMessage({
      domain: window.location.host,
      address: await signer.getAddress(),
      statement: 'Welcome to NeoSphere!',
      uri: window.location.origin,
      version: '1',
      chainId: 137 // Polygon Mainnet
    })

    const signature = await signer.signMessage(message.prepareMessage())

    // Verification on the backend
    const response = await fetch('/api/auth/login', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ message, signature })
    })

    if (response.ok) {
      console.log('Successfully authenticated!')
    }
  }

  return (
    <button onClick={handleLogin}>
      Connect Wallet
    </button>
  )
}
7. Deployment Strategy
Infrastructure with Terraform:
# infra/main.tf
provider "aws" {
  region = "us-east-1"
}

module "neosphere_cluster" {
  source       = "terraform-aws-modules/ecs/aws"
  cluster_name = "neosphere-prod"

  fargate_capacity_providers = ["FARGATE"]

  services = {
    api = {
      cpu    = 512
      memory = 1024
      port   = 8000
    }
    payment = {
      cpu    = 256
      memory = 512
      port   = 3000
    }
  }
}

resource "aws_elasticache_cluster" "redis" {
  cluster_id           = "neosphere-redis"
  engine               = "redis"
  node_type            = "cache.t3.micro"
  num_cache_nodes      = 1
  parameter_group_name = "default.redis6.x"
}
Final Considerations
Automated testing:
Implement end-to-end tests with Cypress for user flows
Load tests with k6 to validate scalability
Security tests with OWASP ZAP
Monitoring:
Set up Prometheus + Grafana for real-time metrics
Integrate Sentry to capture frontend errors
CI/CD:
A GitHub Actions pipeline for automatic deploys
Smart contract checks with Slither
Documentation:
Swagger for the REST API
Storybook for UI components
ArchiMate for architecture documentation
This technical skeleton provides the foundation for a robust NeoSphere implementation, combining Web2 development best practices with Web3 innovations.
0 notes
ubuntu-server · 28 days ago
Text
Ubuntu Weekly Newsletter Issue 893
Welcome to the Ubuntu Weekly Newsletter, Issue 893 for the week of May 18 – 24, 2025. The full version of this issue is available here. In this issue we cover:
- Sunsetting Launchpad's mailing lists
- Installing chrony by default, to enable Network Time Security (NTS)
- Call for nominations: Developer Membership Board restaffing
- Welcome New Members and Developers
- Ubuntu Stats
- Hot in Support
- Rocks…
0 notes
codebriefly · 2 months ago
Photo
New Post has been published on https://codebriefly.com/building-and-deploying-angular-19-apps/
Building and Deploying Angular 19 Apps
Efficiently building and deploying Angular 19 applications is crucial for delivering high-performance, production-ready web applications. In this blog, we will cover the complete process of building and deploying Angular 19 apps, including best practices and optimization tips.
Table of Contents
Why Building and Deploying Matters
Preparing Your Angular 19 App for Production
Building Angular 19 App
Key Optimizations in Production Build:
Configuration Example:
Deploying Angular 19 App
Deploying on Firebase Hosting
Deploying on AWS S3 and CloudFront
Automating Deployment with CI/CD
Example with GitHub Actions
Best Practices for Building and Deploying Angular 19 Apps
Final Thoughts
Why Building and Deploying Matters
Building and deploying are the final steps of the development lifecycle. Building compiles your Angular project into static files, while deploying makes it accessible to users on a server. Proper optimization and configuration ensure faster load times and better performance.
Preparing Your Angular 19 App for Production
Before building the application, make sure to:
Update Angular CLI: Keep your Angular CLI up to date.
npm install -g @angular/cli
Optimize Production Build: Enable AOT compilation and minification.
Environment Configuration: Use the correct environment variables for production.
Building Angular 19 App
To create a production build, run the following command:
ng build --configuration=production
This command generates optimized files in the dist/ folder.
Key Optimizations in Production Build:
AOT Compilation: Reduces bundle size by compiling templates during the build.
Tree Shaking: Removes unused modules and functions.
Minification: Compresses HTML, CSS, and JavaScript files.
Source Map Exclusion: Disables source maps for production builds to improve security and reduce file size.
Configuration Example:
Modify the angular.json file to customize production settings:
"configurations": {
  "production": {
    "optimization": true,
    "outputHashing": "all",
    "sourceMap": false,
    "namedChunks": false,
    "extractCss": true,
    "aot": true,
    "fileReplacements": [
      {
        "replace": "src/environments/environment.ts",
        "with": "src/environments/environment.prod.ts"
      }
    ]
  }
}
Deployment options for Angular apps include:
Static Web Servers (e.g., NGINX, Apache)
Cloud Platforms (e.g., AWS S3, Firebase Hosting)
Docker Containers
Serverless Platforms (e.g., AWS Lambda)
Deploying on Firebase Hosting
Install Firebase CLI:
npm install -g firebase-tools
Login to Firebase:
firebase login
Initialize Firebase Project:
firebase init hosting
Deploy the App:
firebase deploy
Deploying on AWS S3 and CloudFront
Build the Project:
ng build --configuration=production
Upload to S3:
aws s3 sync ./dist/my-app s3://my-angular-app
Configure CloudFront Distribution: Set the S3 bucket as the origin.
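After each sync you will usually also want to invalidate the CloudFront cache so users receive the new build immediately; a sketch with a placeholder distribution ID:
aws cloudfront create-invalidation --distribution-id E1EXAMPLE123 --paths "/*"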
Automating Deployment with CI/CD
Setting up a CI/CD pipeline ensures seamless updates and faster deployments.
Example with GitHub Actions
Create a .github/workflows/deploy.yml file:
name: Deploy Angular App
on: [push]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Node.js
        uses: actions/setup-node@v2
        with:
          node-version: '18'
      - run: npm install
      - run: npm run build -- --configuration=production
      - name: Deploy to S3
        run: aws s3 sync ./dist/my-app s3://my-angular-app --delete
Best Practices for Building and Deploying Angular 19 Apps
Optimize for Production: Always use AOT and minification.
Use CI/CD Pipelines: Automate the build and deployment process.
Monitor Performance: Utilize tools like Lighthouse to analyze performance.
Secure the Application: Enable HTTPS and configure secure headers.
Cache Busting: Use hashed filenames to avoid caching issues.
Containerize with Docker: Simplifies deployments and scales easily (see the sketch below).
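A minimal container workflow, assuming a Dockerfile that copies dist/my-app into an NGINX image (the image name and port are placeholders):
docker build -t my-angular-app .
docker run -d -p 8080:80 my-angular-app    # the site is then served at http://localhost:8080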
Final Thoughts
Building and deploying Angular 19 applications efficiently can significantly enhance performance and maintainability. Following best practices and leveraging cloud hosting services ensure that your app is robust, scalable, and fast. Start building your next Angular project with confidence!
Keep learning & stay safe 😉
You may like:
Testing and Debugging Angular 19 Apps
Performance Optimization and Best Practices in Angular 19
UI/UX with Angular Material in Angular 19
0 notes
hersongfan · 3 months ago
Text
What is the best server configuration for building a spider pool? TG@yuantou2048
When building a spider pool, choosing the right server configuration is critical. It affects not only how efficiently the pool runs but also the results of your SEO work. Some recommended specifications:
1. CPU: a multi-core processor significantly improves throughput. Use at least 8 cores so multiple tasks can run concurrently.
2. Memory: ample RAM is the basis for an efficient spider pool. 16 GB or more is recommended to better support multi-threading and data processing.
3. Storage: choose SSDs as the storage medium; their read/write speeds far exceed traditional mechanical drives and help crawler programs run faster. Size the storage to the data and logs you expect to keep.
4. Bandwidth: high bandwidth speeds up data transfer and reduces latency. For large projects, a 1 Gbps or faster connection is recommended.
5. Operating system: Linux is widely used for its stability and flexibility. Ubuntu and CentOS are both good choices.
6. Disk space: size it to the scale of the project. If you expect to store large volumes of data, provision at least 500 GB and consider a RAID array to improve read/write speed.
7. Network environment: a stable network is a must. Make sure the data center provides enough bandwidth so performance isn't degraded by network bottlenecks.
8. Security: harden the server against attackers. Patch the system regularly and install firewall software.
9. Load balancing: use load-balancing techniques to spread request load and improve response times.
10. Backups: back up data regularly to avoid losing important information.
11. Cloud provider: choose a reputable vendor, such as Alibaba Cloud or Tencent Cloud; they offer strong security measures and fast access speeds.
12. Monitoring tools: deploy effective monitoring to watch server health and catch and resolve problems early.
13. Database management: plan the database schema sensibly and optimize query performance.
14. Firewall rules: configure them properly to block malicious attacks.
15. Maintenance and updates: check system health regularly and keep software versions current.
16. Technical support: pick a vendor with good customer support so problems can be resolved quickly.
17. Cost-effectiveness: compare vendors' offerings to find the best value for money.
18. Scalability: as the business grows, you may need to add hardware, so plan for future growth from the start.
19. Automation scripts: script routine maintenance work to lower operating costs.
20. Other notes: consider the server's geographic location; being close to the target sites reduces latency.
Hope this helps! If you have any questions, feel free to contact me.
Telegram: @yuantou2048
0 notes
nullset2 · 3 months ago
Text
I'm a dumbass
I have a Turing Pi V2 with a Jetson Nano and it's the bees knees but I wanted to clear out some space, so I removed the graphical environment and other packages from Linux4Tegra (which is what NVIDIA offers, basically repackaged Ubuntu 18 LTS with their drivers and facilities, which have been historically NVIDIA's weak point in Linux land).
But the tutorial says "remove network-manager and then reinstall it", and I thought "well, I'll follow the tutorial", foolishly unaware that I had disabled dhcpd earlier in the week due to another issue, so I had no backup.
I came down from 92% usage to 60% which is all fine and dandy but I lost the machine. I can't SSH to it now.
I reinstalled network manager but even so, the machine didn't come back up on the network after reboot, so now I'm going to have to reflash the OS on the machine, which is a pain. NVIDIA's support is a joke.
Funnily enough, the machine does boot and it does prompt me to login if I hook up a monitor to it, but for some God forsaken reason it is not possible to use USB keyboards on the Turing Pi V2 for Jetson Nanos due to a limitation. If I had some way to actually login, I could easily fix this problem by re-enabling dhcpd.
0 notes
holzwerkerblog-com · 6 months ago
Text
A new fence for the router table
A new fence for the router table
This is an older project by now. The initiator was Heiko Rech, whose blog was quite successful at the time. Unfortunately, that post disappeared from my blog over the years. Here you can still see the old fence of the Festool router table in its original form. For the necessary work that fence is sufficient, but it lacks important aids that the new fence provides.
The required frame
The new router fence needs a different mounting on the router table. I had already built and installed it earlier. Maple was the best choice: it machines very well. Aluminium rails can be found online for little money.
Cutting plan, 18 mm multiplex
I do my design work in SketchUp. I still have the old version that runs standalone under Windows. It would be nice if I could do this work on my Ubuntu computer, but I'm not willing to pay the price for that. Here are the most important parts with their dimensions.
Joints
I'm not using any solid wood in this build. 18 mm multiplex is the right material for this project: it is dimensionally stable and barely warps. The long pieces were easy to cut on the MFT/3, the corners on the miter saw. All joints are Lamello joints. The Lamello Zeta P2 has really paid for itself: absolutely precise, and usable for all Lamello connections.
Dust extraction box
As on the Festool original, a generously sized extraction box is built in. Cutting the parts for it is a bit tricky, but the end result is convincing. The angled cuts are all 22.5 degrees. The joints are only glued; the box doesn't have to carry any load and sits under negative pressure during extraction.
Extraction box lid
No 18 mm stock is needed here; 6 mm plywood is enough. NO MDF!!
Test fit
I fixed the parts for the extraction box with packing tape for the glue-up. Tape is always very helpful when gluing miters. A dry fit shows nicely whether all the right angles can be kept and whether every biscuit sits in the right place.
Centering
Once the front parts are aligned and glued, the "centering board" is mounted. It holds the two jaws in a very stable position aligned to the router table. It's the black part in the background. I placed several biscuits here to create a strong connection to the two fence faces.
Glue-up
I was a bit more careful here than usual; the right angle always has to be true. I actually waited a full hour before removing the clamps; usually it's only half an hour.
Aluminium fences
These fences are really good, made exactly for this purpose. The inner ends still have to be cut at 45 degrees, though. That's easy on the miter saw: lowest speed setting and very little feed pressure.
First test on the router table
All parts were glued, all the toggle screws were in, and the guide screws on the table fit as well. Heiko Rech uses different clamps, which might hold more firmly on the table; the small toggles are hard to tighten. But it holds, and the first routing pass came out very well.
Accessories for the router table
I also rebuilt the accessories Heiko Rech had described. However, I threw out the wooden featherboards right away; the ones below can be bought for little money, and they are good.
0 notes
enigmaincrimson-personal · 10 months ago
Text
Internet rebooted just when I finished that post... Scared me for a bit.
To be honest, transitional periods are always messy, even if you somehow preempt them.
Like... It turns out that you have to replace the battery in a phone every 18 to 24 months, but unless you have the proper tools or a repair shop is available in the area, there's no way to just pop the thing open and put in a new one, as they're basically sealed containers.
Most android phones have an 18 month support lifespan with Samsung's 5 year and Google's 7 year being the exception...
Sure, you can do some weird hacky stuff to install Windows 11 on an unsupported old laptop, but even Microsoft admits that is basically a ticking time bomb. Win 10 has only a little over a year left to live at this point as well... And the cost of extended support exceeds the cost of the machine.
Providing the thing doesn't start to come apart on its own before then. Old rubber seams tend to give way and crumble with age after all.
And there's even weirder forms of support lost... While Open Source drivers are technically forever, it just takes some very poor timing to make life miserable for years.
You see... This particular laptop is a dual GPU model... With an Intel HD 4600 running most of the time and an Nvidia GTX 950M that can take over if needed.
First, the Intel GPU is one hardware revision short of proper vulkan support, so running Proton for Windows gaming on Linux on it is... Some what questionable. It isn't what I'd call fast either.
The Nvidia GPU has full Vulkan support, but... You have two main issues... The drivers and getting applications to utilize the GPU when needed.
By default, they use the Nouveau drivers, which do technically work, but run like a car stuck in first gear... And I'm not sure the switchover works at all. It will catch up eventually, but that could be years.
The proprietary GeForce drivers are fully functional, but unless something changes in later drivers, I'm stuck with the older Proprietary Kernel hooks... Which makes addressing the GPU more complicated than it should be. And while there are official Open Source Kernel hooks now, they only work for newer Nvidia GPUs.
The proprietary kernel hooks cause all sorts of problems... They make updating more complicated, the system isn't as finely tuned or integrated as it should be, and more. That also leads into the other annoyance: Optimus support.
As I can't address the secondary GPU directly, I have to use a supplied tool known as PRIME to get it to work... And as simple as prime-run application-name might sound, it can be hit or miss.
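For what it's worth, the usual smoke test with that wrapper looks like this (glxinfo comes from the mesa-utils package, and whether prime-run exists at all depends on the distro's nvidia-prime packaging):
glxinfo | grep "OpenGL renderer"              # should name the Intel HD 4600
prime-run glxinfo | grep "OpenGL renderer"    # should name the GTX 950M if offload works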
Oh and I can't just run an up to date version of Ubuntu or Linux Mint either... Since the kernel version they use doesn't like my wifi adapter... And even more frustrating is that it's like one version number off from being compatible.
Otherwise, it just shuts off after a few minutes... Gotta be 5.9 or greater, but they use 5.8 by default.
To make it even more annoying, the new Wayland window manager requires the Nvidia drivers to be set to 550 or higher... And those use 530ish by default.
I mean, the Nouveau drivers do have Wayland support, but as I said... stuck in first gear.
KDE on Wayland feels very comfortable to me, but... I have no idea how to address the secondary GPU reliably...
No idea if the backlight on the keyboard is gone because it's old or some sort of Asus firmware shenanigans.
I mean, it's ten years old at this point... And keeping this thing running is all I can afford unless some miracle happens.
0 notes