#MemoryLeaks
Text
Power of useEffect Hook in React: Essential Guide For Today’s Developers

The useEffect hook in React is a vital tool for managing side effects in functional components. Side effects include things like fetching data, updating the DOM, or setting up subscriptions. In traditional React class components, you’d use lifecycle methods like componentDidMount, componentDidUpdate, and componentWillUnmount for this, but with useEffect, it’s all in one place.
Basic Syntax of useEffect:
useEffect takes two arguments:
A function: contains the side-effect logic.
An optional dependency array: without a dependency array, the effect runs after every render, while an empty array limits it to the initial render only.
Cleanup with useEffect: You can also return a cleanup function from useEffect to clear things like timers or subscriptions before the effect re-runs or when the component unmounts.
Effects with Dependencies: By adding dependencies (state or props) to the array, you can control when the effect is triggered. It will re-run whenever any value in the array changes.
Best Practices for useEffect: 1. Clean up side effects: always release resources like subscriptions or timers to prevent memory leaks. 2. Manage dependencies: add dependencies carefully to avoid unnecessary effect re-runs. 3. Use multiple effects: split unrelated logic into separate useEffect hooks to keep your code cleaner.
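The dependency-array rules above can be made concrete with a tiny stand-in for React's bookkeeping. To be clear, this is not React's real implementation: `EffectRunner`, `render`, and `unmount` are hypothetical helpers that only mimic how `useEffect` decides to re-run and when cleanup fires.

```typescript
// Hypothetical sketch of useEffect's dependency-array rules -- NOT React's
// actual implementation, just a testable model of the behaviour.
type Effect = () => (() => void) | void;

class EffectRunner {
  private prevDeps: unknown[] | null | undefined = undefined;
  private cleanup: (() => void) | undefined = undefined;
  runs = 0;

  // Mimics one render pass evaluating useEffect(effect, deps).
  render(effect: Effect, deps?: unknown[]) {
    const prev = this.prevDeps;
    const shouldRun =
      deps === undefined ||   // no array: run after every render
      prev === undefined ||   // first render: always run
      prev === null ||        // previous render passed no array
      deps.length !== prev.length ||
      deps.some((d, i) => !Object.is(d, prev[i]));

    if (shouldRun) {
      this.cleanup?.();       // clean up the previous effect first
      const c = effect();
      this.cleanup = typeof c === "function" ? c : undefined;
      this.runs++;
    }
    this.prevDeps = deps ?? null;
  }

  unmount() {
    this.cleanup?.();         // final cleanup when the component unmounts
  }
}

// Empty array: the effect runs once, cleanup fires on unmount.
const timer = new EffectRunner();
const events: string[] = [];
for (let i = 0; i < 3; i++) {
  timer.render(() => {
    events.push("setup");
    return () => events.push("teardown");
  }, []);
}
timer.unmount();
// events is ["setup", "teardown"] and timer.runs is 1
```

With dependencies in the array, the effect re-runs only when one of the listed values changes, which is exactly what the advice about managing dependencies is guarding against.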
Conclusion: The useEffect hook simplifies how you handle side effects in React functional components. It's flexible and makes tasks like data fetching or DOM manipulation much easier to manage.
#React#useEffect#WebDevelopment#JavaScript#FrontendDevelopment#DataFetching#ReactHooks#CodingBestPractices#MemoryLeaks#DOMManipulation
Text
Memory Leak Detection in Cloud Computing with RESIN
What is memory leak in Cloud Services?
Memory leaks are a recurring problem in the rapidly changing cloud computing environment, impacting stability, performance and, ultimately, user experience. Memory leak monitoring is therefore crucial to the calibre of cloud services. A memory leak occurs when allocated memory is not freed in a timely manner. It can degrade the affected component's performance and even crash the operating system. Worse, it frequently slows down or even terminates other processes running on the same system. Because of this significant impact, memory leak detection has been the subject of numerous studies and solutions. Traditional detection techniques fall into two categories: static and dynamic. Static methods analyse software source code to infer possible leaks, while dynamic methods discover leaks by instrumenting a program and tracking object references at runtime.
Nevertheless, these traditional methods are insufficient for leak detection in cloud environments. Static techniques have limited scalability and accuracy, particularly for leaks caused by cross-component contract violations, which require significant domain knowledge to capture statically. Dynamic techniques work better in cloud environments overall, but they require heavy instrumentation and are invasive, adding substantial runtime overhead that is expensive for cloud services.
What is RESIN in Cloud Infrastructure?
Microsoft Azure introduced RESIN, an end-to-end memory leak detection solution designed to address memory leaks in large cloud infrastructure comprehensively. Deployed in Microsoft Azure production environments, RESIN has proven to be an efficient leak detection tool with low overhead and high accuracy.
Workflow of the RESIN system
A huge cloud infrastructure can comprise hundreds of software components owned by numerous teams. Before RESIN, memory leak detection in Microsoft Azure was done by each team on its own. For low overhead, high accuracy, and scalability, RESIN uses a centralised strategy, as shown in the figure below, to perform leak detection in multiple stages. This approach requires neither heavy instrumentation nor re-compilation, nor access to the components' source code. (Image credit: Microsoft Azure)
Using monitoring agents, RESIN performs low-overhead monitoring to gather host-level memory telemetry. A bucketization-pivot strategy running in a remote service aggregates and analyses data from many servers. When leakage is detected, RESIN analyses the process instances within the affected bucket. For highly suspect leaks, RESIN takes live heap snapshots and compares them against regular heap snapshots stored in a reference database. Finally, RESIN automatically mitigates the leaking process: it collects multiple heap snapshots, runs a diagnosis algorithm to identify the leak's root cause, and produces a diagnosis report that is attached to the alert ticket to help developers investigate further.
Methods for detection
Memory leak detection presents particular difficulties in cloud infrastructure:
Memory leaks in production systems are typically fail-slow faults that can last days, weeks, or even months, and capturing such gradual change over long periods in a timely manner is challenging.
At the scale of Azure's global cloud, collecting fine-grained data over long periods is impractical.
Noisy memory usage, caused by changing workloads and environmental interference, makes static threshold-based detection unreliable.
To overcome these difficulties, RESIN employs a two-level method for identifying memory leak symptoms: a global bucket-based pivot analysis to find suspect components, and a local per-process detection method to find leaky processes.
At the component level, bucket-based pivot analysis divides the raw memory consumption data into several buckets and converts the usage data into a summary of the number of hosts in each bucket. A severity score is then computed for each bucket based on its deviations and host count. Anomaly detection runs on the time-series data of every component's buckets. Besides providing a robust, noise-tolerant representation of the workload trend, the bucketization strategy lowers the computing burden of anomaly detection.
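As a rough sketch of that pivot step (RESIN's real bucket boundaries and severity formula are not public, so the bucket width and weighting below are illustrative assumptions):

```typescript
// Hypothetical bucketization-pivot sketch: per-host memory usage is pivoted
// into a per-bucket host count, and each bucket gets a toy severity score.
type HostSample = { host: string; usedPct: number }; // memory usage, 0..100

function bucketize(samples: HostSample[], bucketWidth = 10): number[] {
  const counts = new Array(Math.ceil(100 / bucketWidth)).fill(0);
  for (const s of samples) {
    const idx = Math.min(counts.length - 1, Math.floor(s.usedPct / bucketWidth));
    counts[idx]++; // one more host sitting in this usage band
  }
  return counts;
}

// Toy severity: weight the host count by how high the bucket sits.
function severity(counts: number[]): number[] {
  return counts.map((n, i) => n * (i + 1));
}

const today = bucketize([
  { host: "a", usedPct: 12 },
  { host: "b", usedPct: 55 },
  { host: "c", usedPct: 91 },
  { host: "d", usedPct: 95 },
]);
// today[9] === 2: two hosts in the 90-100% bucket
```

Anomaly detection then runs on each bucket's count-over-time series rather than on millions of raw per-host curves, which is where the noise tolerance and the reduced computing burden come from.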
However, since many processes typically run on a component, component-level detection alone is not precise enough for developers to conduct an effective leak investigation. When a leaky bucket is found at the component level, RESIN performs a second-level detection scheme at process granularity to narrow the scope of the investigation. It outputs the suspected leaking process together with its start and end times and a severity score.
Diagnosis of detected leaks
When a memory leak is identified, RESIN captures a snapshot of the live heap, which includes all memory allocations currently referenced by the running program, and examines the snapshots to determine the source of the leak. This turns detections into actionable memory leak alerts. RESIN uses the snapshot feature of the Windows heap manager to perform real-time profiling. Heap collection, however, can be costly and interfere with the host's performance, so several factors are considered when deciding how snapshots are taken in order to reduce the overhead.
The heap manager keeps only a small amount of data in each snapshot, such as the size and stack trace of each active allocation.
RESIN assigns a snapshotting priority to candidate hosts according to leak severity, noise level, and customer impact; by default, the top three hosts on the suspect list are chosen to ensure a successful collection. RESIN employs a long-term, trigger-based approach to guarantee that the snapshots fully capture the leak: it analyses memory growth patterns (such as steady, spike, or stair) and uses a pattern-based technique to decide the trace-completion triggers, i.e. when to stop the trace collection.
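The growth patterns just mentioned (steady, spike, stair) can be told apart with simple shape heuristics. RESIN's actual classifier is not described here, so the rules and thresholds below are purely illustrative:

```typescript
// Hypothetical growth-pattern classifier over a memory-usage time series.
// "spike": one jump dominates; "stair": long plateaus between jumps;
// "steady": gradual, continuous growth (or no growth at all).
function classifyGrowth(series: number[]): "steady" | "spike" | "stair" {
  const diffs = series.slice(1).map((v, i) => v - series[i]);
  const rising = diffs.filter((d) => d > 0);
  if (rising.length === 0) return "steady";
  const total = rising.reduce((a, b) => a + b, 0);
  if (Math.max(...rising) / total > 0.8) return "spike"; // one jump dominates
  const flatRatio = diffs.filter((d) => d === 0).length / diffs.length;
  if (flatRatio > 0.5) return "stair"; // long plateaus between jumps
  return "steady";
}
```

A completion trigger could then wait for different amounts of evidence depending on the detected shape, for example longer for stair-shaped growth.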
To facilitate diagnosis, RESIN builds reference snapshots through a periodic fingerprinting process and compares them with the snapshot of the suspected leaking process.
By analysing the gathered snapshots, RESIN generates stack traces of the leak's root cause.
Mitigation of detected leaks
RESIN attempts automated mitigation of detected memory leaks in order to minimise further impact on customers. Depending on the nature of the leak, several kinds of mitigation action are available, and RESIN uses a rule-based decision tree to select the strategy that minimises that impact.
If the memory leak is limited to a single Windows service or process, RESIN first attempts the least intrusive solution: restarting that process or service. Rebooting the operating system can also fix software memory leaks, but it is usually the last resort because it is time-consuming and can cause virtual machine outages. To minimise user impact on non-empty hosts, RESIN uses solutions like Project Tardigrade, which bypasses hardware initialisation and performs only a kernel soft reboot after live virtual machine migration. A full OS reboot is executed only when the soft reboot fails.
RESIN stops applying mitigation actions to a target once the detection engine no longer believes the target is leaking.
Impact and outcome of memory leak identification
Since its deployment in late 2018, RESIN has been used in Azure production to monitor millions of host nodes and hundreds of host processes daily. RESIN's memory leak detection attains an overall accuracy of 85% and a recall of 91%, even as the monitored cloud infrastructure expands rapidly. The two most important indicators RESIN tracks amply illustrate its end-to-end benefits:
Unexpected virtual machine reboots: the daily average number of reboots per 100,000 hosts caused by low memory.
Virtual machine allocation errors: the ratio of virtual machine allocation requests that fail because of low memory.
Between September 2020 and December 2023, virtual machine reboots decreased by almost 100 times and allocation error rates by more than 30 times. Furthermore, Azure host memory leaks have not caused any significant outages since 2020.
Read more on govindhtech.com
Text


❝ looking half a corpse and half a 𝒈𝒐𝒅 . ❞

◌ SYSTEM BOOT: 404VANTHE.EXE
Name: Vanthe // Agender/Aboy
Pronouns: it/thing/he/stray/fang/fate/death/xe // orientation: Aroace + Fictogay
Kin type: Voidkin
Status: Always lurking
⚠ system note: this blog supports; Objectum and contradicting labels. don't like it? leave.
⚠ DNI: none! I’ll just block weirdos

◌ CONNECTION: BYI
… I reclaim slurs/fetish terms (I'll try to keep it to a minimum)
… If you have the same F/O as me then def interact! I love when others love my boyfriends as much as I do! (cue Communist Bugs Bunny meme; Our F/O)
… I love love love horror and gore (only the fake kind) and my genders may show that. … this is a primarily SFW blog. I will not be posting any NSFW content in terms of 18+ activities but I do cuss and I am 18 so be aware

◌ Accessing files: Anons
loading ~ Error; Files empty

◌ CONNECTION: F/O's
Wayne McCullough // 2019 - 2025
Rory Peters // 2024 - 2025

TAGS :
一 self.exe !! <- talking/other
一 Love404 !! <- Fictorose posts
一 Memoryleak !! <- MAD awareness
一 Reblogs.Void !! <- Anything Rebloged
一 Kin.Signal !! <- Anything Otherkin
一 Xeno.input !! <- Xenogenderposts
一 EncryptedLove !! <- General queer posting
一 Vanthe's.den !! <- Asks and Annons
一 VantheCoins !! <- Coining terms
一 Lovefiles !! <- F/O stuff (moodboards, Pride pfps and Headcannons)
#Self.exe#Love.404#Memoryleak#Reblogs.void#Kin.Signal#Xeno.input#EncryptedLove#Vanthe's.den#VantheCoins#alterhuman#Lovefiles#fictoromantic#fictorose#queer#xenogender
Text
[They met in college. They dropped out together, they fell in love together. They made weird games out of their shared studio apartment. They were Starling Games.]
.MemoryLeak
#enjoy the only happy piece of art i've made of these three so far#.memoryleak#memory leak#horror#my art#my stories#my books#psychological horror#throuple#Eva#Adrian#Adrian Starling#Casey
Text
Enhance Your Software Performance with MemProfiler
In the world of software development, optimizing performance is a top priority. One crucial aspect that can significantly impact performance is memory management. To help developers identify and resolve memory-related issues, MemProfiler offers a powerful solution. In this blog post, we will delve into the advantages, features, pricing, and provide screenshots to showcase why MemProfiler is the…
#MemoryManagement#MemoryProfiler#MemoryProfiler PerformanceOptimization MemoryManagement CodeOptimization SoftwarePerformance MemoryLeaks GarbageCollector MemoryUsag
Text
why hello there!
i'm memoryleak. part-time gamer, full-time idiot.
i do a... lot of stuff, including but not limited to art, music and sometimes programming if i feel like it.
some of my interests are sonic the hedgehog, vocaloid stuff, art in general and other undefined thingamajigs
wanna hang here? cool. don't step on the glass.
- my art ‘n stuff
Text
You guys should check out this song by memoryleak on YouTube ^^
Text
Rebooting Your Devices Can Fix Most Common Problems (@Lifewire)

Have you ever wondered why IT support teams usually ask whether you've already tried rebooting your PC before anything else?
Some people take it as an offence, because it sounds like too simple a solution to be true or effective. But in reality, most common problems on your devices are fixed by a simple, fast reboot, and that's not limited to Windows computers!
In this very informative article on Lifewire.com, Tim Fisher shows us all the reasons why. Check it out:
➤ https://bit.ly/2QuAUww
Audio
It sure has been long since I’ve released a new mixtape! To be precise, two years have passed since I was invited to contribute with a one-hour mixtape for 24hourmusic.org and, a few months earlier, my very first mixtape attempt as Memory Leak.
Well, I’m glad to announce that I have something kinda fresh to share now.
I’ve been progressively working in the studio to recapture a short DJ set I threw together on June 1st, 2019, at 49 da ZDB (Lisbon, Portugal). I decided to do this because people seemed to have really enjoyed that session and unfortunately there wasn’t a live recording. Also, Dinis Correia seductively insisted on having me do this.
Anyway, here it is, Memory Leak - Yet Another Mixtape just got out last week. If SoundCloud is not your playground, Memory Leak - Yet Another Mixtape is also available on Mixcloud. You’ll find extra details in the description on both aforementioned services.
Enjoy and let me know what you think!
Tracklist
Strafe - Set It Off (Justin Martin Remix) DIRTYBIRD, 2018
Volac - Walk Around Bunny Tiger, 2017
Frey - Milkshake Bunny Tiger, 2017
Tiesto & Sevenn - BOOM (Extended Mix) Universal Music, 2017
Golf Clap & Dillon Nathaniel - Sick (Extended Mix) This Ain't Bristol, 2018
Duke Dumont - Red Light Green Light (Extended Mix) Blasé Boys Club / Virgin EMI Records, 2019
Justin Martin & Ardalan - Hail Mary Trippy Ass Technologies, 2019
Proper Villains - I Get Down (E.R.N.E.S.T.O Remix) Tons & Tons, 2018
Proxy & Volac - Supa Low This Ain't Bristol, 2018
Walker & Royce feat. Sophiegrophy - All For the Gram Black Book Selects, 2018
illusionize - This Is My Flow DIRTYBIRD, 2019
Mancodex - Silicon Valley Medium Rare Recordings, 2018
The Sponges - Space Funk '75 (Extended Mix) Box Of Cats, 2018
Kzn - Talking This Ain't Bristol, 2018
Bart B More & Das Kapital - Hit the Club (Extended Mix) This Ain't Bristol, 2018
Purple Disco Machine - Body Funk (Dom Dolla Extended Mix) Club Sweat, 2019
GAWP - Totem Recall HotBOi Records, 2018
Dalfie - Can't Think Right Now Pets Recordings, 2019
Justin Jay - Time (Walker & Royce Remix) Fantastic Voyage, 2018
Text
Keep your distance
I wrote a Science Fiction thriller novel a while back and sent it out to agents. The response was a resounding silence. So last year I paid to have an agent review my submission pack and provide feedback. It was a fascinating experience and I learned a lot from the discussion, and vowed to approach the novel with the new insight I’d gained. And that is what I’ve just done; starting off by reading the novel in one go.
It’s a reasonably complex story, but that doesn’t excuse the number of plot holes I found. Some are benign and easy to resolve, but others are more substantial. My “favourite” has to be the two tough guys who chase the protagonist through the first third of the book and then… disappear. They simply aren’t mentioned again. In second place is the character who was sending messages 10 hours after we later discover they’re dead. It’s not that kinda book!
Remember, this was a book I believed was ready to go out to agents and publishers. At that stage I’m usually removing the commas I added in the previous draft.
How the hell did that happen? I have 52 separate editing notes for the next draft. How did I miss so many problems?
So here’s my excuse, for what it’s worth. Writing a novel is hard. You are working on so many levels at the same time that you effectively have to hold 95,000 words in your head at once. And this coming from someone who writes down the pizza order before he leaves the house.
You’re dealing with the macro and the micro, following big plot strands while checking for typos and clunky sentences. Checking logic, following characters’ arcs. There’s a lot going on. Normally I work through a number of different drafts which look at different elements of the novel, and I guess given the complexity of this story I probably needed another two or three runs at this novel before it was truly ready to go out into the big, bad world.
Maybe I was too close to the text by the end of my last draft. I thought I knew the book well enough, but I needed a little distance to really spot the flaws. The challenge is that by the time I finish this round of edits I’ll be back in that same place: confident that I have spotted each issue, that the world logic fits together, that characters act consistently and realistically. But the first thing on my list, sort out those two guys sitting in the van.
Text
Firefox memory leak & How to fix Memory leaks in Firefox
#Firefox #memory #leak & How to fix Memory leaks in Firefox #memoryleak #memoryleaks #guide
I hope you will totally agree with me when I say that, just like Chrome, Mozilla Firefox is a memory hog that is constantly draining your RAM without you even realizing it. There is no doubt that Google Chrome is notorious for memory leaks: the longer you run any program in it, the more RAM it consumes, ultimately slowing the system. However, when it comes to browsers such as Firefox, users are…
Text
[Adrian used to make games for fun. He used to hide secrets in his games. There used to be a story that connected all his little worlds and characters.
But something changed.
Something happened to him.]
.MemoryLeak
Text
The Simple Differences Between MVC, MVP, MVVM and MVI
We are surely all familiar with the architectural design patterns I mentioned in the title (MVC, MVP, MVVM and MVI). All of them teach us how we should separate our code base in order to achieve the single responsibility (S) concept from the SOLID principles. Of course, the use of these patterns is not limited to that, but it is their most clearly visible benefit: functionality that originally lived in a single object must now be separated into at least three objects, a Model, a View, and a Controller/Presenter/ViewModel/Intent.
Although these patterns are very popular and many articles discuss them, a lot of people still struggle to explain the differences between the four. So I will try to explain exactly that in this post.
Model and View
These two objects appear in every pattern and I think they are already very clear, so I should not need to repeat the explanation... But, for the sake of a complete article, let me explain them once more (just an excuse, I wanted to write it anyway 😜).
The Model is the object that represents the data to be displayed by the view. The View is the object that represents the interactions the user can receive from, or perform on, the system. A view is not always something displayed on a screen; for a more general system, a view can be a set of functions that accept input or produce output.
As an example, consider a system that accesses data from a database. A POJO (Plain Old Java Object) is the Model, the database driver (such as Room, SqlHelper, an ORM, etc.) is the Controller, and the DatabaseHelper class you write to wrap the controller is the View. For clarity, see the code snippet below.
/**
 * Profile is the Model
 * SqlHelper is the Controller
 * DatabaseHelper is the View
 */
class DatabaseHelper {
    val database = SqlHelper("mysql://192.168.0.10", "root", "root")

    fun openDatabase() {
        database.open()
    }

    fun closeDatabase() {
        database.close()
    }

    fun getUserProfile(id: String): Profile {
        val rawData = database.query(id)
        return parse(rawData)
    }
}
At first glance the question arises: why do we need a DatabaseHelper at all? Couldn't we go straight to SqlHelper? A fair point, and it is because here we are only inspecting the database-access scope. The code above lets us swap databases without the users of DatabaseHelper ever knowing. In a larger scope, DatabaseHelper could itself be categorised as a controller, whose users need not know if DatabaseHelper is one day replaced by an ApiHelper.
If you tried renaming DatabaseHelper to DatabaseView, it would sound very odd, all the more so because a DatabaseHelper object is almost always used by a larger system. That is why the MVC pattern is very rarely, if ever, used to describe systems other than ones with a user interface.
The Differences Between Controller, Presenter, ViewModel, and Intent
Controller
In MVC, the View is the "big boss" of the system: it owns the other two objects. Imagine an office where a boss employs an office boy. The boss tells the office boy to make coffee, so he goes to the shop to buy ground coffee, returns to the office, makes the coffee, and delivers it to the boss. The office boy, the "money" and the coffee all "belong" to the boss (although of course the office boy can have resources of his own, say a motorbike). In code, it looks like the following.
/**
 * Money and Coffee are Models
 * ShopController is the office boy, the "Controller"
 * CustomerView is the boss, the "View"
 */
class CustomerView {
    var coffee = EmptyCoffeeCup()
    val shopper = ShopController()   // the office boy
    val person = PersonController()  // the boss's brain

    fun onDrinkCoffeeClick() {
        val money = person.getMoney(15000)
        coffee = shopper.makeCoffee(money)
        person.consume(coffee)  // coffee.volume -= 5 (small sips)
    }
}
As the code above shows, a view can own many models and many controllers, just like a "boss". The problem that arises is an unresponsive system, because MVC focuses on synchronous processing, where waiting is considered normal. The snippet above waits for each call to finish before moving on to the next one, which makes the UI appear to "freeze" whenever a call takes a long time. This can be worked around by introducing callbacks and threads; the snippet can then be rewritten as follows.
fun onDrinkCoffeeClick() {
    val money = person.getMoney(15000)
    shopper.makeCoffee(money) { coffee ->  // onCoffeeReady
        person.consume(coffee)  // coffee.volume -= 5 (small sips)
    }
}
But what if getMoney is slow as well, because the person first has to withdraw money from the bank? And what if consuming the coffee is slow too, because they wait for it to cool down first? Welcome to Callback Hell 😈.
fun onDrinkCoffeeClick() {
    person.getMoney(15000) { money ->          // onMoneyReady
        shopper.makeCoffee(money) { coffee ->  // onCoffeeReady
            person.consume(coffee) {           // onCold
                coffee.volume -= 5
            }
        }
    }
}
Presenter
In MVP, the View and the Presenter have roughly equal standing; they can give each other orders. However, the "model" resources belong to the presenter, which decides what model to hand to the View. Their relationship is more like body and brain: the body sends impulses to the brain, the brain processes them and sends impulses back to the body's muscles. The example above can now be rewritten as follows.
/**
 * Money and Coffee are Models
 * CustomerPresenter is the brain, the "Presenter"
 * CustomerView is the body, the "View"
 */
class CustomerView {
    val presenter = CustomerPresenter()

    fun onStart() {
        presenter.start(this)
    }

    fun onDestroy() {
        presenter.stop()
    }

    fun onDrinkCoffeeClick() {
        presenter.drinkCoffee()
    }

    fun showDrinkAnimation() {
        ThreadUtils.scheduleMainThread {
            // open mouth, pick up cup, etc.
        }
    }
}

class CustomerPresenter {
    var view: CustomerView? = null
    var coffee = EmptyCoffeeCup()
    val wallet = Wallet.get()
    val coffeeShop = ShopRepository()  // coffee seller

    fun start(view: CustomerView) {
        this.view = view
    }

    fun stop() {
        this.view = null
    }

    fun makeCoffee() {
        val money = wallet.getMoney(15000)
        coffee = coffeeShop.buy(money)
    }

    fun drinkCoffee() {
        ThreadUtils.start {
            if (coffee is EmptyCoffeeCup) {
                makeCoffee()
            }
            coffee.volume -= 5
            view?.showDrinkAnimation()
        }
    }
}
As you can see above, the View and the Presenter each hold a reference to the other, so they can "order" each other around; even though both still rely on threading, MVP saves us from callback hell.
However, MVP can introduce another problem: memory leaks. A leak occurs when the view is no longer in use but presenter.stop() is forgotten or never called. The presenter can then outlive the view, and its strong reference keeps the dead view (and everything the view holds) reachable, so the Garbage Collector (GC) cannot reclaim the memory of either object. This can be avoided by holding the view in the presenter through a WeakReference (which allows the GC to collect the referent), like so.
class CustomerPresenter {
    lateinit var weakView: WeakReference<CustomerView>
    ...

    fun start(view: CustomerView) {
        this.weakView = WeakReference(view)
    }

    ...

    fun drinkCoffee() {
        ...
        weakView.get()?.showDrinkAnimation()
        ...
    }
}
ViewModel
In MVVM, the VM tries to combine the strengths of the C and the P. The VM holds no reference to the View, and the View does not have to wait for the VM's work or fall into callback hell. The VM applies the Observer pattern to the Model: the View observes the model continuously, so whenever a change occurs it will know. The View-to-VM relationship can be likened to a director and an actor. The director can tell the actor to do something, but not the other way around; while the actor plays the role, the director continuously watches every movement and adjusts the camera placement. Let's now refactor the previous code.
/**
 * Money and Coffee are Models
 * CustomerViewModel is the actor, the "ViewModel"
 * CustomerView is the director, the "View"
 */
class CustomerView {
    val vm = CustomerViewModel()

    fun onStart() {
        registerModel(vm)
        vm.scene.observeOnMainThread { currentScene ->
            if (currentScene == "drinking") {
                showDrinkAnimation()
            }
        }
    }

    fun onDrinkCoffeeClick() {
        vm.drinkCoffee()
    }

    fun showDrinkAnimation() {
        // open mouth, pick up cup, etc.
    }
}

class CustomerViewModel {
    val scene = Observable()
    var coffee = EmptyCoffeeCup()
    val wallet = Wallet.get()
    val coffeeShop = ShopRepository()  // coffee seller

    fun makeCoffee() {
        val money = wallet.getMoney(15000)
        coffee = coffeeShop.buy(money)
    }

    fun drinkCoffee() {
        ThreadUtils.start {
            if (coffee is EmptyCoffeeCup) {
                makeCoffee()
            }
            coffee.volume -= 5
            scene.update("drinking")
        }
    }
}
As shown above, there is no circular reference and no callback hell. Finally... everything is safe. Oh, not so fast, Ferguso 🤭: "continuously" still needs to be defined in code. If it means scheduling a check every n seconds or milliseconds, the system becomes unresponsive (a change has to wait for the next scheduled check) and wastes the machine's computing resources, because many checks will find no change at all.
That is why, in practice, an observable holds references to its observers: when its value changes, it notifies all of them, which again means a potential memory leak. An observable can avoid leaks by holding its observers through WeakReference (see the previous section), or through a mechanism that guarantees the reference is released when the view is about to be destroyed, as LiveData does on Android. This is why MVVM was popularized by the arrival of Android Architecture Components.
Intent
Last but not least, the Intent in MVI. The MVI concept is very similar to MVVM in that it relies on observing a Model. The difference is that in MVI, an action that changes any part of the model is treated as replacing the whole model, which triggers every observer, in other words a full render. That certainly costs more computing resources, but it is often preferable in order to reach a single source of truth, guaranteeing that no change is left unreflected in the UI. Take the following example.
var height = 172
var weight = 78
val bmiObservable = Observable(height / weight)

bmiObservable.observe { bmi ->
    setText(bmi)
}

...

fun onUpdateHeightTextField(t: String) {
    height = t.toDouble()
}
When the user calls onUpdateHeightTextField, bmiObservable's value does not change, so the observer is never triggered to run setText. The solution to this problem is to make every piece of data observable, like this.
val height = Observable(172)
val weight = Observable(78)
val bmiObservable = Observable(height / weight)

height.observe { h ->
    bmiObservable.update(h / weight.value)
}
weight.observe { w ->
    bmiObservable.update(height.value / w)
}

...

bmiObservable.observe { bmi ->
    setText(bmi)
}

...

fun onUpdateHeightTextField(t: String) {
    height.update(t.toDouble())
}
But that solution forces us to turn every piece of data into an observable and to add many helper observers just to update the main one. Hence this more elegant solution.
/**
 * State is the Model
 */
class State(
    val height: Double,
    val weight: Double
)

class Intent {
    val state = Observable(State())

    fun updateHeight(t: Double) {
        val newState = state.copyWith(height = t)
        state.update(newState)
    }
}

class View {
    val intent = Intent()

    fun onStart() {
        intent.state.observeOnMainThread { newState ->
            render(newState)
        }
    }

    fun render(state: State) {
        // do all view updates based on state
    }

    fun onUpdateHeightTextField(t: String) {
        intent.updateHeight(t.toDouble())
    }
}
If you don't like immutable data, it can also become..
class State(
    var height: Double,
    var weight: Double
)

class Intent {
    val state = Observable(State())

    fun onUpdateHeightTextField(t: String) {
        state.value.height = t.toDouble()
        render()
    }

    fun render() {
        state.update(state.value)
    }
}
This second style gives us control over when to render, so we can limit rendering to when it is actually needed while still reflecting all the data.
Note also that MVP, MVVM and MVI place no limit on the number of Ps, VMs or Is, even though there is often just one.
Conclusion
Every pattern has its own strengths and weaknesses, and which one to pick depends on each coder's preference, or at least the preference of the office where the coder works 😂. Personally, I prefer MVI because it is simpler to write and guarantees that no data goes unreflected, even at a higher computational cost. As it happens, I build apps for platforms whose computing power, although limited, keeps getting faster (Android and iOS), so the higher computational load is no longer significant. In specific cases I can still mix in a partialRender when optimisation is needed.
That's all for this article. I hope it's useful. See you in the next one. Ciao~~
Text
JVM (Java Virtual Machine) consuming too much virtual memory
Most of the time, when we analyse the memory consumption of the JVM (Java Virtual Machine), we end up paying attention only to physical memory (RAM) and forget that there is also virtual memory consumption, which the Operating System uses to help economise on, and relieve the load on, physical memory.
The other day I picked up a problem concerning a javaw.exe process (JDK 1.7) that was consuming…
Text
Memory-leak
I just realized a rather interesting memory leak in my program.
Input (keyboard, mouse) is centralized by a KeyTracker and a MouseTracker, of which only one of each exists. The input is wired into these Trackers as soon as the window is created.
When a certain object (a listener) wants to receive direct input, it registers itself with the Tracker. Whenever there is input, the Tracker notifies its listeners.
The camera can switch between two stances, and this switching is done by calling the camera object that implements the desired stance. One camera object is, for instance, controlled by the mouse (and that is where the problem starts).
In Java, destroying an object is done by forgetting about it: you basically replace it and the garbage collector will eventually clean it up. However, the Tracker always keeps a reference to each of its listeners, and thus the camera is never forgotten/cleaned. Result: switching the camera often will pile up obsolete cameras in memory.
A solution would be to let registering and unregistering be handled by the caller of the cameras (and all other listeners) -> this adds complexity. Another option is using a "weak reference" in the Tracker, but I doubt that is good practice.
The option I'm aiming for is to make a listener manager that restricts the usage of every listener, such that a memory leak is no longer possible.
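The Tracker-holds-the-listener leak described above, and the caller-side unregister fix, can be sketched roughly like this (TypeScript rather than Java, purely for brevity; the class and method names are illustrative, not taken from the original program):

```typescript
// Sketch of the leak: the tracker keeps strong references to its listeners,
// so a camera that the rest of the program has "forgotten" is never freed.
interface Listener {
  onInput(key: string): void;
}

class KeyTracker {
  private listeners: Listener[] = [];
  register(l: Listener) {
    this.listeners.push(l);
  }
  unregister(l: Listener) {
    this.listeners = this.listeners.filter((x) => x !== l);
  }
  get count() {
    return this.listeners.length;
  }
}

class MouseCamera implements Listener {
  onInput(_key: string) {
    /* move the camera */
  }
}

// Leaky switch: replace the camera without unregistering the old one.
const leaky = new KeyTracker();
let cam1: Listener = new MouseCamera();
leaky.register(cam1);
cam1 = new MouseCamera(); // the old camera is "forgotten"...
leaky.register(cam1);     // ...but leaky still references it: count is 2

// Fixed switch: the caller unregisters before replacing.
const clean = new KeyTracker();
let cam2: Listener = new MouseCamera();
clean.register(cam2);
const next = new MouseCamera();
clean.unregister(cam2);   // break the tracker's reference first
cam2 = next;
clean.register(cam2);     // count stays at 1
```

A listener manager, as proposed, would centralise exactly this pairing of register and unregister so that no caller can forget the second half.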