#drawImage
gtg3000 · 2 years ago
Abusing my work's low-code platform to make a raycaster dungeon crawler.
Sampling textures through the nine-argument drawImage overload is what makes this run in real time, though there isn't enough speed left for floor texturing.
It's running in Fengari stapled to React, so performance is limited.
Sprites exist, but there's no culling for them beyond the shadowmap, so in some situations they can be seen through walls.
Overall, a waste of time but a fun one.
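For readers unfamiliar with the trick mentioned above: the nine-argument drawImage overload lets you copy a one-pixel-wide column out of a texture and stretch it onto a vertical wall slice, which is what makes per-column wall texturing fast. A rough sketch of the idea (the function and its parameters are illustrative, not the author's actual code):

```javascript
// Compute the nine drawImage arguments that sample a 1px-wide texture
// column and stretch it onto a vertical wall slice on screen.
// hitOffset: where along the wall the ray hit, in [0, 1).
function wallSliceDrawArgs(texture, hitOffset, screenX, sliceTop, sliceHeight) {
  const texX = Math.min(texture.width - 1, Math.floor(hitOffset * texture.width));
  // source rect (sx, sy, sw, sh), then destination rect (dx, dy, dw, dh)
  return [texX, 0, 1, texture.height, screenX, sliceTop, 1, sliceHeight];
}

// In the render loop you would spread these into the call:
// ctx.drawImage(textureImage, ...wallSliceDrawArgs(textureImage, 0.5, x, top, h));
```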
harmonyos-next · 3 months ago
HarmonyOS NEXT Practical: Page Watermark
In software development, a watermark is a mark embedded in an application page, image, or document, typically rendered as text or graphics. Watermarks are commonly used for the following purposes:
Source identification: identifies the source or author of an application or file and establishes ownership.
Copyright protection: carries copyright information, helping to deter tampering, theft, and illegal copying.
Artistic effect: adds a distinctive visual style to an image or application.
Implementation idea:
Create a Canvas component and draw the watermark on it.
Use the overlay (floating layer) property to display the canvas on top of the page's UI components.
Knowledge points: the Canvas component provides a surface for custom drawing of graphics. Use a CanvasRenderingContext2D object to draw on the Canvas component; its fillText() method draws text and its drawImage() method draws images.
Canvas.onReady: an event callback fired when the Canvas component finishes initializing or when its size changes. When the event fires, the canvas is cleared; afterwards, the component's width and height are determined and drawing with the Canvas APIs is possible. If the Canvas component only changes position, only the onAreaChange event is triggered, not onReady. The onAreaChange event is triggered after the onReady event.
Canvas.hitTestBehavior: sets the component's touch-test type. Default value: HitTestMode.Default.
Implement the draw() method for drawing watermarks. The default drawing origin is the upper left corner of the canvas. By translating and rotating the coordinate axes, watermarks can be drawn at different positions and angles. If the watermark has a rotation angle, the drawing origin must be translated so the first watermark is displayed completely; the translation distance is calculated from the rotation angle and the watermark's width and height. Finally, the watermark text is drawn with the CanvasRenderingContext2D.fillText() method. [code] fillText(text: string, x: number, y: number, maxWidth?: number): void [/code] Draws filled text. text: the text to draw. x: the x-coordinate of the bottom left corner of the text. Default unit: vp. y: the y-coordinate of the bottom left corner of the text. Default unit: vp. maxWidth: the maximum width allowed for the text. Default unit: vp. Default value: unlimited width.
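The translation offset described above can be checked in plain JavaScript. This is a standalone sketch of the same trigonometry used by the draw() method shown below; the function name is mine, not part of the HarmonyOS API:

```javascript
// For a watermark rotated by angleDeg, compute how far the drawing origin
// must be shifted so the first watermark is fully visible. This mirrors the
// positionX/positionY logic in the draw() method.
function watermarkStartOffset(angleDeg, width, height) {
  const angle = (angleDeg * Math.PI) / 180;
  return angleDeg > 0
    ? { x: height * Math.tan(angle), y: 0 }  // rotated clockwise: shift right
    : { x: 0, y: width * Math.tan(-angle) }; // rotated counter-clockwise: shift down
}
```

For the default -30 degree angle and a 120vp watermark, this shifts the origin down by 120 × tan(30°) ≈ 69.3vp.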
Create watermark: BuildWatermark [code]
@Builder
export function BuildWatermark() {
  Watermark()
    .width('100%')
    .height('100%')
}

@Component
struct Watermark {
  private settings: RenderingContextSettings = new RenderingContextSettings(true);
  private context: CanvasRenderingContext2D = new CanvasRenderingContext2D(this.settings);
  @Prop watermarkWidth: number = 120;
  @Prop watermarkHeight: number = 120;
  @Prop watermarkText: string = '这是文字水印';
  @Prop rotationAngle: number = -30;
  @Prop fillColor: string | number | CanvasGradient | CanvasPattern = '#10000000';
  @Prop font: string = '16vp';

  build() {
    Canvas(this.context)
      .width('100%')
      .height('100%')
      .hitTestBehavior(HitTestMode.Transparent)
      .onReady(() => this.draw())
  }

  draw() {
    this.context.fillStyle = this.fillColor;
    this.context.font = this.font;
    const colCount = Math.ceil(this.context.width / this.watermarkWidth);
    const rowCount = Math.ceil(this.context.height / this.watermarkHeight);
    for (let col = 0; col <= colCount; col++) {
      let row = 0;
      for (; row <= rowCount; row++) {
        const angle = this.rotationAngle * Math.PI / 180;
        this.context.rotate(angle);
        const positionX = this.rotationAngle > 0 ? this.watermarkHeight * Math.tan(angle) : 0;
        const positionY = this.rotationAngle > 0 ? 0 : this.watermarkWidth * Math.tan(-angle);
        this.context.fillText(this.watermarkText, positionX, positionY);
        this.context.rotate(-angle);
        this.context.translate(0, this.watermarkHeight);
      }
      this.context.translate(0, -this.watermarkHeight * row);
      this.context.translate(this.watermarkWidth, 0);
    }
  }
}
[/code] Use watermark: [code]
import { BuildWatermark } from './BuildWatermark';

@Entry
@Component
struct WatermarkDemoPage {
  @State message: string = 'WatermarkDemo';

  build() {
    Column() {
      Text(this.message)
        .fontWeight(FontWeight.Bold)
    }
    .height('100%')
    .width('100%')
    .overlay(BuildWatermark())
  }
}
[/code]
gima326 · 7 months ago
Canvas Basics, Part 7
Practicing the patterns ((1)–(3)) for displaying images. In JavaScript, you have to use an IIFE (immediately invoked function expression) to guard against the "global pollution" that happens because variables end up in the global scope, but in Clojure that shouldn't be a concern. Here I'm practicing a slightly tedious style of writing: attaching to an object (a page, an image, and so on) the processing that concerns that object itself. Concretely, I register an anonymous callback to run when some event fires (here, the image's "load"); inside that callback, if a handler function ("fnc") was passed as an argument, it is invoked with the object itself as its argument (phew, that was deliberately one long sentence, but it really is long).

In imgLoader:
1. create an image instance,
2. attach to the instance a handler that only specifies when it will run, its concrete contents still undecided,
3. finally, bind it to a concrete image via the path argument.

;;===================
(defn imgLoader [path fnc]
  ;; 1. create the image instance
  (let [target (js/Image.)]
    ;; 2. describe up front what happens once the image has loaded
    (.addEventListener
     target
     "load"
     (fn []
       (if fnc
         (fnc target))))
    ;; 3. set the path to start loading the image
    (set! (.-src target) path)))
;;===================
(defn render6 [name]
  (let [canvas (.querySelector js/document.body name)
        ctx (.getContext canvas "2d")]
    (imgLoader
     "./img/color.jpg"
     (fn [img]
       (do
         ;; pattern (1): original size
         (.drawImage ctx img 100 100)
         ;; pattern (2): specify the size
         (.drawImage ctx img 30 10 100 100)
         ;; pattern (3): crop a region and specify the size
         (.drawImage ctx img 10 10 110 110 10 130 50 50))))))
;;===================
amazing-jquery-plugins · 5 years ago
Get The Average Color Of An Image - psColor
psColor is a tiny jQuery plugin that gets the average color of a given image by drawing it to a canvas with the drawImage() API, extracting the RGBA pixel data, and averaging it.
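The core idea can be sketched in a few lines of plain JavaScript. This is an illustration of the technique, not psColor's actual source or API:

```javascript
// Average the RGB channels of canvas pixel data.
// data is the flat [r, g, b, a, r, g, b, a, ...] array returned by
// ctx.getImageData(0, 0, w, h).data after ctx.drawImage(img, 0, 0).
function averageColor(data) {
  let r = 0, g = 0, b = 0;
  const pixelCount = data.length / 4;
  for (let i = 0; i < data.length; i += 4) {
    r += data[i];
    g += data[i + 1];
    b += data[i + 2];
  }
  return {
    r: Math.round(r / pixelCount),
    g: Math.round(g / pixelCount),
    b: Math.round(b / pixelCount),
  };
}
```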
lowestcomdenom · 7 years ago
Niddah-Zine. A zine commentary, translation, overwrite, overdraw, hamfisted, heavyhanded, flatfooted, alt-ac, queer-review, uterine emission. The Mishnah, a compendium written in 3rd-century Palestine, reimagined. But really just drawn out of context, word for word: the rabbis sought to categorize the uterine emissions of women, and of other species. They lived in a world in which one species could birth a different kind. Male gaze meets species/flesh/women’s bodies kicking back. #zine #niddah #mishnah #rabbis #midwives #artist #art #drawblood #drawimage #speculum #malegaze #cisgaze #queerreview #altac #worksonpaper #art #penandink #uterus #reproductivejustice #species #hybris #xerox https://www.instagram.com/p/BnM0m3YgCUJ/?utm_source=ig_tumblr_share&igshid=tf6o0kibcbpo
perksjust · 3 years ago
Anime anychart
The year was 1995: Toy Story hit the theaters, kids were obsessively collecting little cardboard circles, and Kiss From a Rose was being badly sung by everyone. I was a gangly ten-year-old, and like any other relatively tall kid I was often addressed with "you must be so good at basketball!". So I practiced and practiced, spending hours on the court of my elementary school. Eventually, I realized, much to the dismay of aunts and other cheek-pinchers alike, that while occupying vertical real estate might give you an advantage in the art of basketball, it does not ensure it.
Fast forward 21 years later. Now a tall and gangly developer, still bad at basketball, I was faced with a project: designing and implementing a full motion video web basketball game for the NBA's Detroit Pistons. Throwing balls around is one thing; throwing pixels around, now that's finally a basketball challenge I can ace!
While developing the game I used many neat things like canvas, SVG and CSS animations, gesture recognition, and a video stream that's dynamically constructed on the fly. It's really amazing what we can do with just a browser these days. In this article, I want to focus on how I implemented the animation for the Superpower Gauge using vanilla JS in conjunction with GSAP. This is the motion reference I used while implementing the animation, created in After Effects:
In 1on1, once the user succeeds in making a move, they're awarded combo points. The gauge sits at the top left corner of the screen, and its task is to convey to the user the amount of their combo points as denoted by the number of red segments. At certain times in the game, the Superpower Gauge becomes active, notifying the user they can click it to make their in-game avatar perform a special move.
The basic structure of the Superpower is achieved with one Canvas element and a bit of simple geometry: see the Pen "Pistons Superpower: Structure" by Opher Vishnia on CodePen. Essentially, there are two main components here: the central image and the gauge segments. The image is the easy part; it's just a trivial use of canvas' drawImage. The gauge segments are where things get interesting. I defined a general options object with some properties to play with later on, like the number of segments, radius, width and so on. Then I iterate over an array of segment objects and use their properties (strokeStyle, lineWidth) to draw the actual segments with the canvas arc function.
So far so good, but where's the animation? I was debating whether to use a canvas animation framework but ultimately decided against it. This is because I needed to use several types of animations in the project: Canvas, SVG and CSS/DOM, and no one framework does it all. In addition, all of the animations had to run smoothly on top of playing video, on both desktop and mobile with varying capabilities and network conditions. This means that performance was nothing if not paramount, and I wanted to know exactly which code powers the animation.
Luckily GSAP (aka Greensock, aka TweenMax, aka TweenLite) allows me to do just that. It enables you to animate pretty much anything! The trick is that the animation API accepts not only DOM/SVG objects but also arbitrary JS data structures, whose properties you can then "animate". The basic idea is that you use GSAP to change the properties of these objects over time. These values specify how the UI looks at any given point in time. On each requestAnimationFrame you make a draw call to the canvas to draw the state of the UI based on those objects. Like I mentioned earlier, the render function then calls drawComboGui on each requestAnimationFrame, ideally 60 times a second. In drawComboGui, first we clear the canvas of any previous data drawn onto it before drawing the current state. To create the glow effect, I drew two segments on top of each other. The bottom one uses shadowBlur on the canvas path, and the top one has no shadow blur.
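The GSAP trick described in this post (animating plain JS objects, then drawing their state on each frame) can be seen in miniature. The sketch below substitutes a hand-rolled linear tween for GSAP so it stays self-contained; with GSAP you would call gsap.to(gauge, {...}) instead, and the "render" step would be a canvas draw call rather than storing a snapshot:

```javascript
// Animate a plain JS object's properties over time, then "render" its state.
// A minimal stand-in for GSAP's tweening of arbitrary data structures.
const gauge = { segments: 0, glow: 0 };

function tween(target, props, steps) {
  const start = { ...target };
  const frames = [];
  for (let i = 1; i <= steps; i++) {
    for (const key of Object.keys(props)) {
      // linear interpolation from the starting value toward the target value
      target[key] = start[key] + ((props[key] - start[key]) * i) / steps;
    }
    frames.push({ ...target }); // in a real app: draw the canvas here instead
  }
  return frames;
}

const frames = tween(gauge, { segments: 8, glow: 1 }, 4);
// gauge now holds the final values; frames holds the intermediate UI states
```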
eleasurecreativecoding · 4 years ago
This is my project for the body tracking ML assignment. First, I made a maze using a bunch of different sized rectangles. Then, I created a small smiley face that tracks the nose so it can be guided through the maze. This project looked a lot more intimidating than it actually was to do, but then again, my game is extremely simple. My process basically followed the tutorial that was linked in the assignment, which made it a lot easier! I did have trouble getting the circle to line up with the nose at first, which was frustrating, but then I realized that I needed to change the x and y positioning in the part that says “canvas.elt.getContext("2d").drawImage.” I would like to go back and add some more interaction when I get a chance, because it’s kind of boring, but I am proud that I figured this out and got something done!
suzanneshannon · 5 years ago
Nailing the Perfect Contrast Between Light Text and a Background Image
Have you ever come across a site where light text is sitting on a light background image? If you have, you’ll know how difficult that is to read. A popular way to avoid that is to use a transparent overlay. But this leads to an important question: Just how transparent should that overlay be? It’s not like we’re always dealing with the same font sizes, weights, and colors, and, of course, different images will result in different contrasts.
Trying to stamp out poor text contrast on background images is a lot like playing Whac-a-Mole. Instead of guessing, we can solve this problem with HTML <canvas> and a little bit of math.
Like this:
CodePen Embed Fallback
We could say “Problem solved!” and simply end this article here. But where’s the fun in that? What I want to show you is how this tool works so you have a new way to handle this all-too-common problem.
Here’s the plan
First, let’s get specific about our goals. We’ve said we want readable text on top of a background image, but what does “readable” even mean? For our purposes, we’ll use the WCAG definition of AA-level readability, which says text and background colors need enough contrast between them such that one color is 4.5 times lighter than the other.
Let’s pick a text color, a background image, and an overlay color as a starting point. Given those inputs, we want to find the overlay opacity level that makes the text readable without hiding the image so much that it, too, is difficult to see. To complicate things a bit, we’ll use an image with both dark and light space and make sure the overlay takes that into account.
Our final result will be a value we can apply to the CSS opacity property of the overlay that gives us the right amount of transparency that makes the text 4.5 times lighter than the background.
Optimal overlay opacity: 0.521
To find the optimal overlay opacity we’ll go through four steps:
We’ll put the image in an HTML <canvas>, which will let us read the colors of each pixel in the image.
We’ll find the pixel in the image that has the least contrast with the text.
Next, we’ll prepare a color-mixing formula we can use to test different opacity levels on top of that pixel’s color.
Finally, we’ll adjust the opacity of our overlay until the text contrast hits the readability goal. And these won’t just be random guesses — we’ll use binary search techniques to make this process quick.
Let’s get started!
Step 1: Read image colors from the canvas
Canvas lets us “read” the colors contained in an image. To do that, we need to “draw” the image onto a <canvas> element and then use the canvas context (ctx) getImageData() method to produce a list of the image’s colors.
function getImagePixelColorsUsingCanvas(image, canvas) {   // The canvas's context (often abbreviated as ctx) is an object   // that contains a bunch of functions to control your canvas   const ctx = canvas.getContext('2d'); 
   // The width can be anything, so I picked 500 because it's large   // enough to catch details but small enough to keep the   // calculations quick.   canvas.width = 500; 
   // Make sure the canvas matches proportions of our image   canvas.height = (image.height / image.width) * canvas.width; 
   // Grab the image and canvas measurements so we can use them in the next step   const sourceImageCoordinates = [0, 0, image.width, image.height];   const destinationCanvasCoordinates = [0, 0, canvas.width, canvas.height]; 
   // Canvas's drawImage() works by mapping our image's measurements onto   // the canvas where we want to draw it   ctx.drawImage(     image,     ...sourceImageCoordinates,     ...destinationCanvasCoordinates   ); 
   // Remember that getImageData only works for same-origin or    // cross-origin-enabled images.   // https://developer.mozilla.org/en-US/docs/Web/HTML/CORS_enabled_image   const imagePixelColors = ctx.getImageData(...destinationCanvasCoordinates);   return imagePixelColors; }
The getImageData() method gives us a list of numbers representing the colors in each pixel. Each pixel is represented by four numbers: red, green, blue, and opacity (also called “alpha”). Knowing this, we can loop through the list of pixels and find whatever info we need. This will be useful in the next step.
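To make that flat layout concrete, here's a small helper (my own, not part of the article's code) that reads one pixel's color out of a getImageData result:

```javascript
// getImageData().data is a flat array: [r, g, b, a, r, g, b, a, ...],
// laid out row by row. Pixel (x, y) starts at index (y * width + x) * 4.
function pixelAt(imageData, x, y) {
  const i = (y * imageData.width + x) * 4;
  const d = imageData.data;
  return { r: d[i], g: d[i + 1], b: d[i + 2], a: d[i + 3] };
}
```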
Step 2: Find the pixel with the least contrast
Before we do this, we need to know how to calculate contrast. We’ll write a function called getContrast() that takes in two colors and spits out a number representing the level of contrast between the two. The higher the number, the better the contrast for legibility.
When I started researching colors for this project, I was expecting to find a simple formula. It turned out there were multiple steps.
To calculate the contrast between two colors, we need to know their luminance levels, which is essentially the brightness (Stacie Arellano does a deep dive on luminance that’s worth checking out.)
Thanks to the W3C, we know the formula for calculating contrast using luminance:
const contrast = (lighterColorLuminance + 0.05) / (darkerColorLuminance + 0.05);
Getting the luminance of a color means we have to convert the color from the regular 8-bit RGB value used on the web (where each color is 0-255) to what’s called linear RGB. The reason we need to do this is that brightness doesn’t increase evenly as colors change. We need to convert our colors into a format where the brightness does vary evenly with color changes. That allows us to properly calculate luminance. Again, the W3C is a help here:
const luminance = (0.2126 * getLinearRGB(r) + 0.7152 * getLinearRGB(g) + 0.0722 * getLinearRGB(b));
But wait, there’s more! In order to convert 8-bit RGB (0 to 255) to linear RGB, we need to go through what’s called standard RGB (also called sRGB), which is on a scale from 0 to 1.
So the process goes: 
8-bit RGB → standard RGB  → linear RGB → luminance
And once we have the luminance of both colors we want to compare, we can plug in the luminance values to get the contrast between their respective colors.
// getContrast is the only function we need to interact with directly. // The rest of the functions are intermediate helper steps. function getContrast(color1, color2) {   const color1_luminance = getLuminance(color1);   const color2_luminance = getLuminance(color2);   const lighterColorLuminance = Math.max(color1_luminance, color2_luminance);   const darkerColorLuminance = Math.min(color1_luminance, color2_luminance);   const contrast = (lighterColorLuminance + 0.05) / (darkerColorLuminance + 0.05);   return contrast; } 
 function getLuminance({r,g,b}) {   return (0.2126 * getLinearRGB(r) + 0.7152 * getLinearRGB(g) + 0.0722 * getLinearRGB(b)); } function getLinearRGB(primaryColor_8bit) {   // First convert from 8-bit RGB (0-255) to standard RGB (0-1)   const primaryColor_sRGB = convert_8bit_RGB_to_standard_RGB(primaryColor_8bit); 
   // Then convert from sRGB to linear RGB so we can use it to calculate luminance   const primaryColor_RGB_linear = convert_standard_RGB_to_linear_RGB(primaryColor_sRGB);   return primaryColor_RGB_linear; } function convert_8bit_RGB_to_standard_RGB(primaryColor_8bit) {   return primaryColor_8bit / 255; } function convert_standard_RGB_to_linear_RGB(primaryColor_sRGB) {   const primaryColor_linear = primaryColor_sRGB < 0.03928 ?     primaryColor_sRGB/12.92 :     Math.pow((primaryColor_sRGB + 0.055) / 1.055, 2.4);   return primaryColor_linear; }
Now that we can calculate contrast, we’ll need to look at our image from the previous step and loop through each pixel, comparing the contrast between that pixel’s color and the foreground text color. As we loop through the image’s pixels, we’ll keep track of the worst (lowest) contrast so far, and when we reach the end of the loop, we’ll know the worst-contrast color in the image.
function getWorstContrastColorInImage(textColor, imagePixelColors) {   let worstContrastColorInImage;   let worstContrast = Infinity; // This guarantees we won't start too low   for (let i = 0; i < imagePixelColors.data.length; i += 4) {     let pixelColor = {       r: imagePixelColors.data[i],       g: imagePixelColors.data[i + 1],       b: imagePixelColors.data[i + 2],     };     let contrast = getContrast(textColor, pixelColor);     if(contrast < worstContrast) {       worstContrast = contrast;       worstContrastColorInImage = pixelColor;     }   }   return worstContrastColorInImage; }
Step 3: Prepare a color-mixing formula to test overlay opacity levels
Now that we know the worst-contrast color in our image, the next step is to establish how transparent the overlay should be and see how that changes the contrast with the text.
When I first implemented this, I used a separate canvas to mix colors and read the results. However, thanks to Ana Tudor’s article about transparency, I now know there’s a convenient formula to calculate the resulting color from mixing a base color with a transparent overlay.
For each color channel (red, green, and blue), we’d apply this formula to get the mixed color:
mixedColor = baseColor + (overlayColor - baseColor) * overlayOpacity
So, in code, that would look like this:
function mixColors(baseColor, overlayColor, overlayOpacity) {   const mixedColor = {     r: baseColor.r + (overlayColor.r - baseColor.r) * overlayOpacity,     g: baseColor.g + (overlayColor.g - baseColor.g) * overlayOpacity,     b: baseColor.b + (overlayColor.b - baseColor.b) * overlayOpacity,   }   return mixedColor; }
Now that we’re able to mix colors, we can test the contrast when the overlay opacity value is applied.
function getTextContrastWithImagePlusOverlay({textColor, overlayColor, imagePixelColor, overlayOpacity}) {   const colorOfImagePixelPlusOverlay = mixColors(imagePixelColor, overlayColor, overlayOpacity);   const contrast = getContrast(textColor, colorOfImagePixelPlusOverlay);   return contrast; }
With that, we have all the tools we need to find the optimal overlay opacity!
Step 4: Find the overlay opacity that hits our contrast goal
We can test an overlay’s opacity and see how that affects the contrast between the text and image. We’re going to try a bunch of different opacity levels until we find the contrast that hits our mark where the text is 4.5 times lighter than the background. That may sound crazy, but don’t worry; we’re not going to guess randomly. We’ll use a binary search, which is a process that lets us quickly narrow down the possible set of answers until we get a precise result.
Here’s how a binary search works:
Guess in the middle.
If the guess is too high, we eliminate the top half of the answers. Too low? We eliminate the bottom half instead.
Guess in the middle of that new range.
Repeat this process until we get a value.
I just so happen to have a tool to show how this works:
CodePen Embed Fallback
In this case, we’re trying to guess an opacity value that’s between 0 and 1. So, we’ll guess in the middle, test whether the resulting contrast is too high or too low, eliminate half the options, and guess again. If we limit the binary search to eight guesses, we’ll get a precise answer in a snap.
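Stripped of the contrast details, the eight-guess search looks like this (a generic sketch, assuming the function being searched is monotonically increasing over [0, 1]):

```javascript
// Eight-step bisection: find x in [0, 1] where f(x) is close to target.
function bisect(f, target, maxGuesses = 8) {
  let lowerBound = 0;
  let upperBound = 1;
  let midpoint = 0.5;
  for (let guess = 0; guess < maxGuesses; guess++) {
    midpoint = (lowerBound + upperBound) / 2;
    if (f(midpoint) < target) {
      lowerBound = midpoint; // guess too low: discard the bottom half
    } else {
      upperBound = midpoint; // guess too high: discard the top half
    }
  }
  return midpoint;
}
// After 8 guesses the answer is pinned down to within 1/2^8, about 0.004.
```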
Before we start searching, we’ll need a way to check if an overlay is even necessary in the first place. There’s no point optimizing an overlay we don’t even need!
function isOverlayNecessary(textColor, worstContrastColorInImage, desiredContrast) {   const contrastWithoutOverlay = getContrast(textColor, worstContrastColorInImage);   return contrastWithoutOverlay < desiredContrast; }
Now we can use our binary search to look for the optimal overlay opacity:
function findOptimalOverlayOpacity(textColor, overlayColor, worstContrastColorInImage, desiredContrast) {   // If the contrast is already fine, we don't need the overlay,   // so we can skip the rest.   const overlayIsNecessary = isOverlayNecessary(textColor, worstContrastColorInImage, desiredContrast);   if (!overlayIsNecessary) {     return 0;   } 
   const opacityGuessRange = {     lowerBound: 0,     midpoint: 0.5,     upperBound: 1,   };   let numberOfGuesses = 0;   const maxGuesses = 8; 
   // If there's no solution, the opacity guesses will approach 1,   // so we can hold onto this as an upper limit to check for the no-solution case.   const opacityLimit = 0.99; 
   // This loop repeatedly narrows down our guesses until we get a result   while (numberOfGuesses < maxGuesses) {     numberOfGuesses++; 
     const currentGuess = opacityGuessRange.midpoint;     const contrastOfGuess = getTextContrastWithImagePlusOverlay({       textColor,       overlayColor,       imagePixelColor: worstContrastColorInImage,       overlayOpacity: currentGuess,     }); 
     const isGuessTooLow = contrastOfGuess < desiredContrast;     const isGuessTooHigh = contrastOfGuess > desiredContrast;     if (isGuessTooLow) {       opacityGuessRange.lowerBound = currentGuess;     }     else if (isGuessTooHigh) {       opacityGuessRange.upperBound = currentGuess;     } 
     const newMidpoint = ((opacityGuessRange.upperBound - opacityGuessRange.lowerBound) / 2) + opacityGuessRange.lowerBound;     opacityGuessRange.midpoint = newMidpoint;   } 
   const optimalOpacity = opacityGuessRange.midpoint;   const hasNoSolution = optimalOpacity > opacityLimit; 
   if (hasNoSolution) {     console.log('No solution'); // Handle the no-solution case however you'd like     return opacityLimit;   }   return optimalOpacity; }
With our experiment complete, we now know exactly how transparent our overlay needs to be to keep our text readable without hiding the background image too much.
We did it!
Improvements and limitations
The methods we’ve covered only work if the text color and the overlay color have enough contrast to begin with. For example, if you were to choose a text color that’s the same as your overlay, there won’t be an optimal solution unless the image doesn’t need an overlay at all.
In addition, even if the contrast is mathematically acceptable, that doesn’t always guarantee it’ll look great. This is especially true for dark text with a light overlay and a busy background image. Various parts of the image may distract from the text, making it difficult to read even when the contrast is numerically fine. That’s why the popular recommendation is to use light text on a dark background.
We also haven’t taken where the pixels are located into account or how many there are of each color. One drawback of that is that a pixel in the corner could possibly exert too much influence on the result. The benefit, however, is that we don’t have to worry about how the image’s colors are distributed or where the text is because, as long as we’ve handled where the least amount of contrast is, we’re safe everywhere else.
I learned a few things along the way
There are some things I walked away with after this experiment, and I’d like to share them with you:
Getting specific about a goal really helps! We started with a vague goal of wanting readable text on an image, and we ended up with a specific contrast level we could strive for.
It’s so important to be clear about the terms. For example, standard RGB wasn’t what I expected. I learned that what I thought of as “regular” RGB (0 to 255) is formally called 8-bit RGB. Also, I thought the “L” in the equations I researched meant “lightness,” but it actually means “luminance,” which is not to be confused with “luminosity.” Clearing up terms helps how we code as well as how we discuss the end result.
Complex doesn’t mean unsolvable. Problems that sound hard can be broken into smaller, more manageable pieces.
When you walk the path, you spot the shortcuts. For the common case of white text on a black transparent overlay, you’ll never need an opacity over 0.54 to achieve WCAG AA-level readability.
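That 0.54 figure can be verified directly. For white text, the worst case is a pure white pixel; a black overlay at opacity a turns it into the gray value 255 * (1 - a), so we only need the smallest a whose resulting luminance reaches the 4.5:1 ratio. A quick check using the same W3C formulas as above:

```javascript
// White text over a white pixel dimmed by a black overlay at opacity a.
// The mixed pixel is gray with 8-bit value 255 * (1 - a).
function luminanceOfGray(value8bit) {
  const sRGB = value8bit / 255;
  // Same sRGB -> linear conversion as in the article. For gray, the
  // 0.2126 + 0.7152 + 0.0722 weights sum to 1, so luminance == linear value.
  return sRGB < 0.03928 ? sRGB / 12.92 : Math.pow((sRGB + 0.055) / 1.055, 2.4);
}

function contrastOfWhiteTextAtOpacity(a) {
  const backgroundLuminance = luminanceOfGray(255 * (1 - a));
  return (1 + 0.05) / (backgroundLuminance + 0.05); // white text has luminance 1
}
// contrastOfWhiteTextAtOpacity(0.54) is about 4.59, clearing the 4.5 goal;
// at 0.52 it's about 4.27, just short of it.
```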
In summary…
You now have a way to make your text readable on a background image without sacrificing too much of the image. If you’ve gotten this far, I hope I’ve been able to give you a general idea of how it all works.
I originally started this project because I saw (and made) too many website banners where the text was tough to read against a background image or the background image was overly obscured by the overlay. I wanted to do something about it, and I wanted to give others a way to do the same. I wrote this article in hopes that you’d come away with a better understanding of readability on the web. I hope you’ve learned some neat canvas tricks too.
If you’ve done something interesting with readability or canvas, I’d love to hear about it in the comments!
The post Nailing the Perfect Contrast Between Light Text and a Background Image appeared first on CSS-Tricks.
0 notes
wallpaperpaintings · 5 years ago
16 Simple (But Important) Things To Remember About Canvas | Canvas
16 Simple (But Important) Things To Remember About Canvas | Canvas – canvas | Welcome to my personal blog. In this post I'm going to show you images related to this keyword. Here is the first graphic:
The Program Canvas – A useful tool for designing programs – IPMA .. | canvas
What about the picture above? Isn't it wonderful? If you think so, I'll show you several more images below:
If you would like to save these pictures about (16 Simple (But Important) Things To Remember About Canvas | Canvas), click the save link to download them to your laptop. They are ready for download; if you appreciate one and want to get it, simply click the save logo on the page and it will be downloaded to your home computer. To find new and recent graphics related to this topic, please follow us or bookmark this page; we try our best to offer you daily updates with fresh and new pics. Hope you enjoy staying here.
Thank you for visiting our website. The article above, (16 Simple (But Important) Things To Remember About Canvas | Canvas), was published here because we found it an extremely interesting topic to point out. Many people are looking for information about it, and you are certainly one of them, aren't you?
Wall Art Canvas Painting Print Fine Pea Feathers .. | canvas
canvas> | HTML | WebReference – canvas | canvas
Horse ready to hang 3 panel wall art print mounted on MDF .. | canvas
The 16 deadly sins in designing your Business Model Canvas – North .. | canvas
Heart bedroom Romantic night Love SINGLE CANVAS WALL ART .. | canvas
Canvas photo – canvas | canvas
Abstract art by Osnat Tzadok | Art, Painting, Sun painting – canvas | canvas
HTML Basics: Part 16 | canvas
Francois Pascal Simon Baron Gerard Portrait Of Madame .. | canvas
Canvas – canvas | canvas
Updated Empathy Map Canvas | Карта на холсте, Мышление дизайн .. | canvas
Lean Canvas 📝 — шаблон и примеры заполнения модели – canvas | canvas
Штаны URBAN TACTICAL – Canvas – canvas | canvas
Canvas – canvas | canvas
Canvas Student – Додатки в Google Play – canvas | canvas
from WordPress https://www.bleumultimedia.com/16-simple-but-important-things-to-remember-about-canvas-canvas/
0 notes
holytheoristtastemaker · 5 years ago
Link
Hey folks! In this post I will show you how to access the device’s cameras on a web page, via JavaScript, with support for multiple browsers and without the need for external libraries.
Tumblr media
How to access the camera
To access the user’s camera (and/or microphone) we use the JavaScript MediaStream API. This API allows to access the video and audio captured by these devices through streams. The first step is to check if the browser supports this API:
if ("mediaDevices" in navigator && "getUserMedia" in navigator.mediaDevices) {
  // ok, browser supports it
}
Support is decent in modern browsers (no Internet Explorer, of course).
Capturing the video stream
To capture the video stream generated by the camera, we use the getUserMedia method of the mediaDevices object. This method receives an object with the types of media we are requesting (video or audio) and some requirements. To start, we can just pass {video: true} to get the video from the camera.
const videoStream = await navigator.mediaDevices.getUserMedia({ video: true });
This call will ask the user for permission to access the camera. If the user denies it, it throws an exception and does not return the stream. So it must be done inside a try/catch block to handle this case.
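Since a denied prompt surfaces as a rejected promise, the try/catch can branch on the DOMException's `name` to tell the user what went wrong. A minimal sketch — `startCamera()` and `userMessageFor()` are hypothetical helper names invented here, not part of the MediaStream API:

```javascript
// Map the standard DOMException names raised by getUserMedia to friendly text.
// userMessageFor() is a hypothetical helper, not a library function.
function userMessageFor(error) {
  switch (error.name) {
    case "NotAllowedError": // the user denied the permission prompt
      return "Camera access was denied. Please allow it and retry.";
    case "NotFoundError": // no camera matches the constraints
      return "No suitable camera was found on this device.";
    case "NotReadableError": // hardware is busy or failing
      return "The camera is already in use by another application.";
    default:
      return "Could not start the camera: " + error.name;
  }
}

// Hedged wrapper: request the stream, attach it to a <video>, or report why not.
async function startCamera(videoElement) {
  try {
    const stream = await navigator.mediaDevices.getUserMedia({ video: true });
    videoElement.srcObject = stream;
    return stream;
  } catch (error) {
    console.warn(userMessageFor(error));
    return null;
  }
}
```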
Tumblr media
Note that it returns a Promise, so you have to use async/await or a then block.
Video requirements
We can improve the requirements of the video by passing information about the desired resolution and minimum and maximum limits:
const constraints = {
  video: {
    width: { min: 1280, ideal: 1920, max: 2560 },
    height: { min: 720, ideal: 1080, max: 1440 },
  },
};

const videoStream = await navigator.mediaDevices.getUserMedia(constraints);
That way the stream comes in the correct proportion of width and height. If it is a cell phone in portrait mode it takes care of inverting the dimensions.
Displaying the video on the page
Okay, now that we have the stream, what can we do with it? We can display the video on the page, in a video element:
// considering there is a
// <video autoplay id="video"></video>
// tag in the page
const video = document.querySelector("#video");
const videoStream = await navigator.mediaDevices.getUserMedia(constraints);
video.srcObject = videoStream;
Note the autoplay attribute in the video tag. Without it, you need to call video.play() to actually start displaying the image.
Accessing the phone’s front and rear cameras
By default getUserMedia will use the system’s default video recording device. In the case of a cell phone with two cameras, it uses the front camera. To access the rear camera, we must include facingMode: "environment" in the video requirements:
const constraints = {
  video: {
    width: { ... },
    height: { ... },
    facingMode: "environment",
  },
};
The default is facingMode: "user", which is the front camera. Be aware that, if you want to change the camera with the video already playing, you will need to stop the current stream before replacing it with the stream from the other camera:
videoStream.getTracks().forEach((track) => {
  track.stop();
});
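Putting the two pieces together — stop the old tracks, then request a new stream with the other `facingMode`. `withFacingMode()` and `switchCamera()` are hypothetical helper names for this sketch, not library functions:

```javascript
// Return a new constraints object with the requested facingMode merged in.
// Pure function, so it is easy to reason about and test.
function withFacingMode(constraints, facing) {
  return {
    ...constraints,
    video: { ...constraints.video, facingMode: facing },
  };
}

// Sketch of flipping cameras: release the old stream first (otherwise the
// device may still be held), then reacquire with the other facing mode.
async function switchCamera(video, currentStream, facing) {
  currentStream.getTracks().forEach((track) => track.stop());
  const stream = await navigator.mediaDevices.getUserMedia(
    withFacingMode({ video: {} }, facing)
  );
  video.srcObject = stream;
  return stream;
}
```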
Taking screenshots
Another cool thing you can do is capture images (screenshots) of the video. You can draw the current video frame on a canvas, for example:
// considering there is a
// <canvas id="canvas"></canvas>
// tag in the page
const canvas = document.querySelector("#canvas");
canvas.width = video.videoWidth;
canvas.height = video.videoHeight;
canvas.getContext("2d").drawImage(video, 0, 0);
You can also display the canvas content in an img element. In the example I created for this tutorial, I added a button that creates images dynamically from the canvas and adds them to the page. Something like:
const img = document.createElement("img");
img.src = canvas.toDataURL("image/png");
screenshotsContainer.prepend(img);
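The two snippets above can be folded into a single helper that sizes the canvas to the video's intrinsic resolution, draws the current frame, and returns a data URL. `captureFrame()` is a name invented for this sketch:

```javascript
// Grab one frame of the video into the canvas and return it as a PNG data URL.
// captureFrame() is a hypothetical helper, not a built-in API.
function captureFrame(video, canvas) {
  // Match the canvas to the intrinsic video resolution, not its CSS size,
  // so the screenshot is not scaled or blurred.
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext("2d").drawImage(video, 0, 0);
  return canvas.toDataURL("image/png");
}
```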
0 notes
knoldus · 7 years ago
Link
Welcome All!!
In this blog, we are going to discuss the CORS issue and how to resolve it while working with Lagom. So let's begin.
What is CORS?
CORS: Cross Origin Resource Sharing
Cross-origin resource sharing (CORS) is a mechanism that allows restricted resources (e.g. fonts) on a web page to be requested from another domain outside the domain from which the first resource was served. With CORS, same-origin communication is always allowed, while cross-origin communication is restricted to a few permitted techniques.
For security reasons, browsers restrict cross-origin HTTP requests initiated from within scripts. For example, XMLHttpRequest and the Fetch API follow the same-origin policy. This means that a web application using those APIs can only request HTTP resources from the same origin the application was loaded from, unless the response from the other origin includes the right CORS headers. In plain words, the browser blocks calls made by unknown domains and keeps paths open only to known domains, so security is ensured against hostile requests.
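The mechanism described above can be sketched in miniature: a CORS-aware server compares the request's `Origin` header against an allow-list and decides whether to emit an `Access-Control-Allow-Origin` response header. The helper below is purely illustrative (this is not Lagom's implementation, just the core decision):

```javascript
// Decide which CORS response header, if any, to send for a given request.
// corsHeaderFor() is a hypothetical helper showing the mechanism only.
function corsHeaderFor(requestOrigin, allowedOrigins) {
  if (!requestOrigin) {
    return null; // same-origin request: the browser sends no Origin header
  }
  return allowedOrigins.includes(requestOrigin)
    ? { "Access-Control-Allow-Origin": requestOrigin }
    : null; // no header: the browser will block the cross-origin response
}
```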
What requests use CORS?
This cross-origin sharing standard is used to enable cross-site HTTP requests for:
Invocations of the XMLHttpRequest or Fetch APIs in a cross-site manner, as discussed above.
Web Fonts (for cross-domain font usage in @font-face within CSS), so that servers can deploy TrueType fonts that can only be cross-site loaded and used by web sites that are permitted to do so.
WebGL textures.
Images/video frames drawn to a canvas using drawImage.
Stylesheets (for CSSOM access).
Scripts (for unmuted exceptions).
Implementing CORS can sometimes be a headache for developers, but doing it correctly removes the problem once and for all for a given application.
So now the question is how are we going to implement CORS in Lagom framework?
And the solution lies in just 4 steps given by Lagom developers:
Step 1: Include filters as a dependency on your -impl project. filters is a package provided by Play Framework.
<dependency>
    <groupId>com.typesafe.play</groupId>
    <artifactId>filters-helpers_2.12</artifactId>
    <version>2.6.15</version>
</dependency>
Step 2: Create a class that implements DefaultHttpFilters and inject Play’s CORSFilter
import play.filters.cors.CORSFilter;
import play.http.DefaultHttpFilters;

import javax.inject.Inject;

public class Filters extends DefaultHttpFilters {
    @Inject
    public Filters(CORSFilter corsFilter) {
        super(corsFilter);
    }
}
Step 3: Register that newly created class on your application.conf using:
play.http.filters = "com.fun.assignment.user.impl.Filters"
Step 4: Finally, add an ACL on your Service.Descriptor matching the OPTIONS method for the paths you are exposing on your Service Gateway.
@Override
default Descriptor descriptor() {
    return named("user-api").withCalls(
        Service.restCall(Method.GET, "/users/api/get/users", this::getUsers),
        Service.restCall(Method.GET, "/users/api/get?id", this::getUser),
        Service.restCall(Method.POST, "/users/api/add", this::addUser),
        Service.restCall(Method.PUT, "/users/api/update?id", this::updateUser),
        Service.restCall(Method.DELETE, "/users/api/delete?id", this::deleteUsers)
    ).withAutoAcl(true).withServiceAcls(
        ServiceAcl.methodAndPath(Method.OPTIONS, "/users/api.*")
    );
}
Hope this blog would be helpful to you. For more doubts and examples regarding Lagom, feel free to go through our blogs, because we at Knoldus believe in gaining knowledge and growing our skills together.
References:
https://github.com/lagom/lagom-recipes/tree/master/cors/cors-java
https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS
0 notes
myquestionbank · 8 years ago
Text
HTML5 questions
Which method is used to draw an image on the canvas? The drawImage(image, x, y) method draws an image, canvas, or video frame onto the canvas. Where: image – the image or video element to draw; x – the x-coordinate at which to place the image on the canvas; y – the y-coordinate at which to place the image on the canvas. Example: What are the various elements provided by HTML5 for media…
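For completeness, drawImage() actually has three overloads: draw at a point, draw scaled to a given size, and copy a source rectangle into a destination rectangle. A sketch — `drawExamples()` is a hypothetical name, and `ctx` and `img` are assumed to be a 2D rendering context and a loaded image:

```javascript
// Demonstrate the three drawImage() overloads of the canvas 2D context.
function drawExamples(ctx, img) {
  // 1. Three arguments: draw at (x, y) using the image's natural size.
  ctx.drawImage(img, 10, 10);
  // 2. Five arguments: draw at (x, y) scaled to width x height.
  ctx.drawImage(img, 10, 10, 100, 50);
  // 3. Nine arguments: copy the source rect (sx, sy, sw, sh) into the
  //    destination rect (dx, dy, dw, dh) -- sprite-sheet style sampling.
  ctx.drawImage(img, 0, 0, 32, 32, 10, 10, 64, 64);
}
```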
View On WordPress
0 notes
securitynewswire · 8 years ago
Text
Off-by-one error in the DrawImage function in magick/render.c in GraphicsMagick
SNPX.com : http://dlvr.it/PmTC2D
0 notes
alexmenar · 8 years ago
Video
youtube
(via https://www.youtube.com/watch?v=iZ0hUNWadh0) Inside the canvas tag you can generate not only vector elements but also bitmap images. In this tutorial we review the different options of the drawImage method.
0 notes
101javascript · 8 years ago
Photo
Tumblr media
drawImage & Grid - Minesweeper JavaScript & Canvas Game Development Tutorial 2 - http://ift.tt/2nDMQcJ
0 notes
zer0dayeurope-blog · 8 years ago
Text
Low CVE-2016-2318: Debian Debian linux
GraphicsMagick 1.3.23 allows remote attackers to cause a denial of service (NULL pointer dereference) via a crafted SVG file, related to the (1) DrawImage function in magick/render.c, (2) SVGStartElement function in coders/svg.c, and (3) TraceArcPath function in magick/render.c.
View On WordPress
0 notes