#Altik-Reference
Lessons learned from browsing computer parts
1) AMD has dropped out of contention with Intel, focusing more on its APUs than on high-end CPUs. The bottom line: if you're building a high-end machine, expect an Intel chip to be in it, at least for the foreseeable future.
2) Intel has recently released their new Ivy Bridge CPUs, which are smaller and nicer for various reasons (as you'd expect from an upgrade), but they run hotter when overclocked, so you have to keep that in mind when planning a build.
3) Intel CPUs with model numbers ending in K are unlocked for overclocking - hence those are the ones you want if you're overclocking. If you're not, save $20 and get the non-K version.
4) The most important things to check on a MOBO:
Chipset and Socket compatibility with your CPU. This is usually pretty easy to find in the board's spec sheets, but going to the manufacturer's site and checking compatibility with your specific CPU is always a Good Idea.
Overclockability. This is trickier - mostly relies on testimonials, it seems. Do some hunting around Google to see what people say about its performance.
Types and quantity of ports. Obviously you'll want at least enough PCIe slots for a GPU and anything else you're including, but it's also worth checking that they're PCIe 3.0 and that the SATA connections are 6Gbps.
Arrangement of stuff. Make sure you'll be able to fit your GPU, RAM, and aftermarket heat sink without any conflicts. This is a common problem, but if you look at reviews, it's usually pretty easy to get an idea of what will fit and what won't.
Make sure that the board is compatible with CrossFire / SLI, depending on what you might upgrade to in the future (or plan on doing now). For those out of the know (including, probably, my future self): CrossFire-compatible means the board can handle more than one AMD GPU, and SLI-compatible means the board can handle more than one NVIDIA GPU.
5) The GPU will probably be your biggest money sink, especially if you're doubling up.
6) PSUs are confusing, so just get one that has a decent reputation, will fit in your case, and has enough wattage to handle your components with a little leeway (extra headroom won't fry anything - it mostly just costs you money and efficiency). At least, that's my recommendation to myself until I get a better grasp on them. :\
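To put rough numbers on the leeway (hypothetical figures - check your actual parts): a ~95W CPU plus a ~170W GPU plus, say, 75W for the board, drives, and fans comes to about 340W. Multiply by roughly 1.3 for headroom and you land around 450W, so a well-reviewed 500W unit would be a comfortable fit.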
7) Important factors in a case:
Price - Personally I'm not big on paying premiums for aesthetics in a case, but some people are, so you'll see some expensive cases just because they have light up fans and shit like that.
Good ventilation - Unless you're doing liquid cooling (which seems insane to me, at least with my level of expertise), the case will need good ventilation channels for keeping everything cool. Nice cases will come with a few fans to help you out, as well as several places to add extra fans if your computer is running hot.
Dust protection - With good ventilation comes several locations for dust to enter the machine. Having dust filters is a very nice bonus - especially if they are easy to clean.
Cable management - Having wires everywhere is bad for your build (for various reasons), so having built-in channels to feed cables through is very nice and a Good Thing.
Space - It's key that your various components can actually fit in the case. The biggest concerns will be your MOBO and the GPU: the former is easy to check, because the board will list ATX or micro-ATX as its form factor, and you can compare this against the case. The GPU is trickier: just look at what other people say about it, and deduce from research. It's a pain, but it matters.
8) RAM is RAM, though some RAM is a bit better if you're being really picky:
RAM is labeled with a clockrate - higher is better, though above 1600 makes very little difference from what I can tell.
Delay times are also usually provided - something along the lines of 8-8-8-24. Smaller numbers are better: these are the numbers of clock cycles necessary to access RAM in various ways. Expect bigger numbers at higher clockrates, it seems.
Some RAM producers put giant ass heat sinks on their sticks, which often just get in the way. You might shy away from the bigger ones if you're afraid they'll conflict with other parts on the board.
9) Don't get more computer than you need! I really fell for this on my first build, and wasted a bunch of time because of it. TL;DR: there are a lot of fancy parts out there that look nice, but they're often newer than necessary, and you'll end up paying a price premium for very little advantage. Just do a bit of research on exactly how much you'll need out of your computer for what you plan on doing, and you'll get a better idea of what sort of power you'll need from your parts.
...wow, lots of text. Sorry to spam your dashes guys! I just want to keep this stuff on hand for when I upgrade in the future. :3
NDS Development: Some info on Sprites
So, I'm going to make this post with the assumption that the readers know how tile based graphics work in a rough sense - in particular, I'm coming from a background of working on the GBA and using tile graphics there, so if you're completely out of the know, go read Tonc's pages on this first.
To set up Sprites on the NDS, you've got to initialize a lot more stuff than with bitmap modes. libnds makes this kind of easy and kind of hard, depending on your perspective: it seems to me that they pretty much assume you'll use just their system for getting everything set up, so you have a lot of distinct steps in the process that end up getting merged together in their library.
What I mean is that there are a lot of situations like this: just like on the GBA, the NDS uses a DISPCNT register to control video modes and various parameters for them. If you were working with tonclib or making your own system to interface with the registers, what you'd expect to do would be to initialize this register first with all the parameters you expect, then proceed to set up the OAM and VRAM in their own initialization functions. libnds, on the other hand, sets up DISPCNT while you call various other functions - ie, when you call the built in OAM setup function, this is when the DISPCNT register is set to allow you to use sprites.
Like I said, this is good or bad depending on your perspective. The good thing about it is that it'll be easier to control exactly what bit-depth you want and when, and it abstracts things away a bit, making it easier to just make function calls and not have to think about the hardware. On the other hand, the documentation for libnds is pretty poor, and most tutorials are grossly out of date, so trying to figure out exactly what you're supposed to do is a bit tricky without doing some pretty hardcore research yourself. Plus, since everything is so entwined into their various functions, you'll pretty much be stuck using whatever functionality they've pre-programmed for you - no building your own OAM Manager to optimize some particular feature, or get a sense of how it works yourself; not unless you're willing to rewrite the entire graphics portion of the library.
So what I'm going to do for you is run down what you should do (as of libnds 1.7.3) to get simple sprites set up and displayed on the screen:
1) Make your sprite, and export the data to .c code. You can do this with Usenti or Grit, both applications written by Cearn (the guy who wrote Tonc). Personally I prefer Usenti because it has an easier to use palette manager and an actual interface to draw a sprite itself, but Grit seems to be more standard in the NDS community.
2) In main, make the following function calls in order (a consolidated sketch follows this list):
videoSetMode(VIDEO_MODE) - this sets the first few bits of DISPCNT, indicating which video mode the 2D engine should be using. HOWEVER, it doesn't set anything else.
vramSetBankX(VRAM_X_MODE) - this sets whichever data is necessary to set what the VRAM X bank will be used for. Generally, you'll use VRAM_A and VRAM_B for the main engine and VRAM_C and VRAM_D for the sub engine (specifically, VRAM_C for the sub screen background, and VRAM_D for sprites - they aren't applicable the other way around), but not necessarily. Look into the hardware specs to see exactly what each bank can be used for and how to access them if you're interested.
oamInit(&oamMain / &oamSub, SpriteMapping_1D_BOUNDARY, usingExtPalette) - this initializes the OAM managers that are built into the library. Intuitively, oamMain is for the main engine and oamSub is for the sub engine. The second parameter indicates how you want sprites mapped in memory; more on this later. The third indicates whether you are using extended palettes - I haven't used these, so I don't know how they work, but generally I'd think you probably won't use them.
u16* gfx = oamAllocateGfx(&oamMain / &oamSub, SPRITE_SIZE, SPRITE_COLOR_FORMAT) - this function tells the OAM manager to allocate a sprite and some VRAM to put its data into. SPRITE_SIZE refers to how large the sprite can be: there are several options, but the gist of it is that you have between 8x8 and 64x64 pixel sprites composed of 8x8 tiles. SPRITE_COLOR_FORMAT indicates whether you are using bitmap, 256-color, or 16-color modes (ie, 16, 8, or 4 bits per pixel respectively).
Copy the data you've exported from Usenti/Grit into memory at the address stored in gfx - the value returned from oamAllocateGfx is an address to VRAM where the OAM expects its sprite data to be. So after allocating your sprite, simply copy the data into the memory at that location (using, say, dmaCopy, in libnds) and you'll have your tiles in VRAM.
Save the palettes in memory - libnds has predefined locations for the main and sub sprite palettes, namely SPRITE_PALETTE and SPRITE_PALETTE_SUB. Just copy 16 bit colors into these locations and you'll be golden.
Call oamSet() for each of your sprites - the function takes an OAM manager and an OAM index (ie, which sprite to modify) and sets the various values of that sprite. For example, you call this function to set the x and y position of the sprites on screen. Check the documentation for the complete list of parameters.
swiWaitForVBlank() (note the capital B) - the OAM can only be written to during vblank, while the screen isn't drawing, so this just halts execution until you can change the data.
oamUpdate(&oamMain / &oamSub) - this copies the shadow OAM you've been modifying outside of vblank into the actual OAM to change the display. This should be pretty much the last thing you do in your main loop.
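Put together, the whole dance looks something like this minimal sketch. (The sprite.h header and the spriteTiles/spritePal names are assumed grit-style output, and the 32x32, 256-color settings are just for illustration - swap in whatever your export actually uses.)

    #include <nds.h>
    #include "sprite.h" // assumed grit output: spriteTiles, spriteTilesLen, spritePal, spritePalLen

    int main(void) {
        videoSetMode(MODE_0_2D);                        // 2D mode on the main engine
        vramSetBankA(VRAM_A_MAIN_SPRITE);               // bank A holds main-engine sprite tiles
        oamInit(&oamMain, SpriteMapping_1D_128, false); // 1D mapping, no extended palettes

        // Allocate VRAM for one 32x32, 256-color sprite and copy the tiles/palette in.
        u16* gfx = oamAllocateGfx(&oamMain, SpriteSize_32x32, SpriteColorFormat_256Color);
        dmaCopy(spriteTiles, gfx, spriteTilesLen);
        dmaCopy(spritePal, SPRITE_PALETTE, spritePalLen);

        while (1) {
            // Entry 0: draw at (100, 50), priority 0, palette 0, no affine
            // transform, not hidden, no flips, no mosaic.
            oamSet(&oamMain, 0, 100, 50, 0, 0,
                   SpriteSize_32x32, SpriteColorFormat_256Color,
                   gfx, -1, false, false, false, false, false);

            swiWaitForVBlank();  // OAM is only writable during vblank
            oamUpdate(&oamMain); // copy the shadow OAM into the real OAM
        }
    }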
Those should be sufficient to get basic sprite graphics functioning. IMO it's a bit more obnoxious than doing the GBA graphics with tonclib, but it isn't the end of the world once you get it.
If you're still having trouble with the library, I'd recommend updating your version of devkitPro (just to make sure you're up to date), then looking into their nds examples - particularly examples/nds/Graphics/Sprites. These bits of code are much more topical and up-to-date relative to the current version of libnds than pretty much any tutorial you'll find online, so there's no fuss over whether the things a tutorial recommends are outdated or deprecated.
One last thing: tile memory settings.
There are lots of different modes for this, but they come down to a few factors:
1D or 2D: 1D means the tiles are laid out in memory in the order you'll insert them into sprites. This is the easiest and most practical way to use them, IMO. 2D maps the tiles out so that if you were to look at memory as a table, you'd be able to recognize the sprites you drew. This is actually really impractical, since a lot of data will be misaligned and you'll have to carefully determine where to store your tiles. TL;DR, don't use 2D mode.
Bitmap vs Non-Bitmap: bitmap sprites just mean you're using a full 16-bit color for each pixel in your tile, rather than an index into a palette. This means you can get many, MANY more colors into your sprite, but at the cost of much higher memory consumption. Generally I don't think this is necessary, but it's an option.
Memory boundary: one of the settings in a sprite's OAM entry is its starting tile index. Because there are only 10 bits to specify this value, there are two consequences: 1) you can reference at most 1024 distinct starting tiles at any one time, and 2) you can't directly index every tile you have in VRAM, because you'll have far more than 1024. The memory boundary setting tells the OAM how far apart starting tiles are: ie, if you set it to 32, then starting tile 0 is at position 0 in VRAM, and starting tile 1 is 32 bytes later. Likewise, if you set it to 64, there are 64 bytes between tile entries.
The memory boundary setting is the most complex part of this, but it really comes down to this: check GBATEK for how big the VRAM bank you're using for sprites is, and use the memory boundary that matches the number of kB in the bank. If you use a smaller size, you won't be able to access the full VRAM bank; if you use a larger one, you'll have fewer unique sprites that you'll be able to define. Neither of these is the end of the world, but they do waste your resources a bit (in general, anyway).
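To see why the sizes line up: the 10-bit index gives 1024 slots, so a 128-byte boundary can address 1024 x 128 bytes = 128 kB - exactly the size of a 128 kB bank like VRAM_A. A 32-byte boundary would only reach 1024 x 32 = 32 kB of it.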
Pardon the long post - I just had to struggle for a long time to figure out how all the sprite settings worked when playing with libnds, and I didn't want to fumble around the internet searching for that information again if I could help it.
Terrible Lessons Learned From C (pt. I)
Enums and Bitmasks are the best!
Especially when working on software like a game, you often find that certain lines of execution need to change based on what state the program is in. For example, if you're paused, you should skip over all the main update loop stuff and instead run the pause menu update stuff. It gets even messier when you start talking about internal states in the software.
In C, the nicest way (at least that I've found so far) to accomplish this is to track what state you're in, and branch on those states. In the pause example, you would track "this game is now in paused mode", and at the top of your logic loop you just switch on the enum you created to give nice names to these states, and you end up with something like:
    switch (state) {
        case normal: normalFunc(); break;
        case paused: pauseFunc(); break;
        /* ... */
        default: error();
    }
And the thing is, this is perfectly fine for C. You split those functions out to a few different files if they get too messy, and your code actually stays quite maintainable.
But now enter Object-Oriented Land, after training yourself on that sort of system, and you start saying to yourself "Okay, I've got these room things which have a few different options for implementation, so maybe I'll just branch on cases and track a state for the class. Works in C, so it must be good!"
The problem is, this is not as nice and clean as it was in C. Now, in order to maintain your code, you can't really split out the helper functions as you would in C to keep things clean. They're basically private helper functions, so they get stuck in the same file, and you bulk up your code massively, giving you a huge headache to deal with down the line.
So, how do you do this nicely with classes? With classes.
Okay, that might sound dumb, but here's the thing: a good portion of the time you have these state-based execution paths in OOP (at least in my experience), it's because you say something like "This room could have enemies, or it could have a shop, or it could...". In C, you're usually looking at "This program should branch depending on user input...".
The subtle difference here is that if a class's "type" isn't going to dynamically change (like it very well might in the C case), you don't need to track it at all, nor do you need to branch. Just create a base class and use inheritance to do your job for you - that's what it's for.
Plus, now you get a big upshot. In C, if your various states share some code (I need to update the enemy attacking stuff in standard execution, as well as when the user opens their inventory, but not when paused), you have a mess on your hands trying to get this code to merge. In OOP, just create another nested level and you're golden. Granted, that doesn't always work, but if it gets too convoluted, you can probably reorganize the structure to make things work out.
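To make the room example concrete, here's a toy C++ sketch (all names invented):

    #include <memory>
    #include <vector>

    struct Room {
        virtual void update() = 0; // each room type supplies its own behavior
        virtual ~Room() {}
    };

    struct EnemyRoom : Room {
        void update() override { /* run enemy AI, attacks, etc. */ }
    };

    struct ShopRoom : Room {
        void update() override { /* handle shop menus */ }
    };

    void updateRooms(std::vector<std::unique_ptr<Room>>& rooms) {
        for (auto& room : rooms)
            room->update(); // no switch, no state enum - dispatch is automatic
    }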
An extension to this idea is tracking values in bit-fields. By doing that, you pack more data into a small space, and can vastly shrink large fields of repeated types of data. At some point, you just start doing it by habit in other programming, especially to keep track of these states we just talked about.
This pushes on a much more menacing and pervasive issue in programming: premature optimization. Yes, if you only need 8 values for some integer, and 16 for another, sharing bits saves space. However, if you just start doing that by default, you make your code almost illegible (because shifts and masks to get values are a lot less intuitive to an outside party than just calling a property name), and further, very difficult to modify later.
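Concretely, the habit looks something like this in C (the field layout is invented for illustration):

    #include <stdint.h>

    /* Invented layout: pack a 3-bit state and a 4-bit HP value into one byte. */
    #define STATE_SHIFT 0
    #define STATE_MASK  0x07u
    #define HP_SHIFT    3
    #define HP_MASK     0x0Fu

    static inline uint8_t pack(uint8_t state, uint8_t hp) {
        return (uint8_t)(((state & STATE_MASK) << STATE_SHIFT) |
                         ((hp    & HP_MASK)    << HP_SHIFT));
    }

    static inline uint8_t get_state(uint8_t packed) {
        return (packed >> STATE_SHIFT) & STATE_MASK;
    }

    static inline uint8_t get_hp(uint8_t packed) {
        return (packed >> HP_SHIFT) & HP_MASK;
    }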
But I'll get to these other issues in more depth in a later post.
#Altik-Reference#programming#c#bad habits#enums#bitmasks#oop#object-oriented programming#there will be more of these periodically as I think of significant things to say
NDS Development: Debugging!
For the longest time now I've been doing NDS development and thought there was no way to do any debugging. For the most part I've just slid by on my wits, figuring out what might cause issues. Actually, it's kind of impressive the bugs I've found by just thinking about the issue - but it's completely unreasonable to do this all the time! It takes me a day or two to figure out just where the issue lies in most cases, and 9 times out of 10, once I figure out where it is, the fix is pretty much obvious.
Of course, that's where a good debugger comes in. As I said, I figured all this time that there was no option for doing debugging on the NDS. Generally speaking, this is correct: obviously the device doesn't come with a console window or GDB built in, and even if it did, there's no good way to step through source code on there. And most emulators aren't going to be nice enough to do much beyond just, y'know, emulating the device.
However, it turns out that the creators of DeSmuME are kinder than most developers! If you get the right distribution (currently, the only version I've found for Windows is 0.9.6, 32-bit - I can't say for sure whether there are newer ones; just don't get an x64 build), there is an included executable, DeSmuME_dev.exe. This executable takes a flag, --arm9gdb=[PORT NUMBER], which turns the emulator into something like a debugging server.
Short version (to my understanding): running the dev executable with that flag and a filepath to the executable you wish to debug will launch the emulator, streaming debugging information over a local port. Then you can find an IDE that can hook into such a server (the WinGDB extension for Visual Studio works nicely if you have a full version - a tutorial for setting everything up can be found here: http://vgcoding.blogspot.com/2011/01/how-to-debug-your-nds-rom-using-visual.html), and PRESTO: you have exactly the same sort of debugging you'd expect from any other programming project!
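If you'd rather skip the IDE, plain GDB should work too, since it's a standard GDB server. The shape of it is something like this (port and file names invented; any ARM-targeting GDB such as devkitARM's should do):

    DeSmuME_dev.exe --arm9gdb=20000 mygame.nds   (launch the emulator as a debug server)
    arm-none-eabi-gdb mygame.elf                 (in another shell, load your symbols)
    (gdb) target remote localhost:20000          (attach, then debug as usual)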
This is seriously a godsend, I promise. Memory bugs crop up all the time on the NDS, since there's nothing there to catch you. You just get your pointers when you get them, and oftentimes you accidentally reuse them or tell your half-baked allocator to double-free a pointer, causing everything to crash.
Most commonly, I see this pop up when working with libnds's pre-cooked OAM "allocator", which, upon a request to allocate a sprite, will just hand you a pointer to VRAM to put your tiles in, and give you nothing else. All your subsequent calls to their shadow OAM setup expect not only that handed-out pointer, but also a unique ID and various other stats.
This all means that if you expect to have even remotely complex allocation of sprites in your system, you'll have to bake up your own allocator for IDs just to make sure you don't overuse them (fun fact: hand over an ID that's too big, and their shadow OAM will silently fail an assert and the emulation freezes).
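Something minimal does the job. A sketch of the kind of ID allocator I mean (names invented; 128 is the per-engine OAM entry count):

    #include <stdbool.h>

    #define OAM_ENTRIES 128

    static bool oamIdUsed[OAM_ENTRIES];

    /* Returns a free OAM index, or -1 if all 128 are taken. */
    static int allocOamId(void) {
        for (int i = 0; i < OAM_ENTRIES; i++) {
            if (!oamIdUsed[i]) { oamIdUsed[i] = true; return i; }
        }
        return -1;
    }

    static void freeOamId(int id) {
        if (id >= 0 && id < OAM_ENTRIES)
            oamIdUsed[id] = false;
    }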
Blah blah, rant rant - the point is, the debugger is real, and it helped me spot a nasty bug in my interactions with the libnds OAM manager in a few minutes - one that would otherwise have taken me a day or two to find.
Hope this helps someone out in the aether!
How to upload a Maven Plugin to a private repository
AKA: the stupid problem for this two-week sprint.
I was working on a fairly simple Maven plugin for my employer's deployment, and all seemed well until it needed to be put on the local repository. If you are not familiar: Maven is a build tool that, among other things, automatically searches for external dependencies in online repositories. If your project wants to depend on another internal project, you can specify other repositories to scan through, then host a server with more or less a fancy FTP setup to act as that repo.
Anyway, plugging objects in is fairly straightforward. Put JARs into the directory system organized by project titles, and include dependency info for those JARs with a Maven metafile (known as a POM). So I got everything packaged up, and sent it to the guy with access to the repo for putting it in the box.
Wouldn't you know, though? Didn't work. Took me ages to track this down (hence the post), but what ended up being the problem is a dumb "feature" from days of old: http://stackoverflow.com/questions/14396017/difference-between-repository-and-pluginrepository
In short: plugins are Java projects that incorporate into the Maven build process itself. Maybe it's a preprocessing step, or a test framework - things like that. But at the end of the day, they are just another Java project. Yet, inexplicably, Maven allows (and apparently mandates) that you specify the repository for plugins separately from the repository for other dependencies. Those repositories can still be the same place, but they must both be explicitly defined. Gross.
Anyway, posting this to provide a bit more awareness of it, because it got me sidetracked for about a week's worth of time. Blah.
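For the record, the fix in the consuming project's pom.xml looks something like this (the id and URL are placeholders):

    <!-- Both blocks can point at the same internal repo, but Maven wants both. -->
    <repositories>
      <repository>
        <id>internal</id>
        <url>http://repo.example.com/maven2</url>
      </repository>
    </repositories>
    <pluginRepositories>
      <pluginRepository>
        <id>internal</id>
        <url>http://repo.example.com/maven2</url>
      </pluginRepository>
    </pluginRepositories>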
Android Dev Protips! -- Why isn't notifyDataSetChanged working?
Fun bug from this afternoon: let's say you have a custom ListAdapter you're using to fill data into a ListView, and you're trying to implement a method that inserts a new element into that list. You're competent with the use of Adapters, so you're confident it works - in fact, the model for your add function matches most of the recommendations on StackOverflow:
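(The original snippet didn't survive the trip here; it was the usual shape, roughly - names hypothetical:)

    public void addItem(String item) {
        items.add(item);          // mutate the adapter's backing list...
        notifyDataSetChanged();   // ...then tell the observers to refresh
    }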
So, you're confident everything is looking good, yet whenever you actually go to add the new string, the list doesn't update! It comes up when you leave and return to the activity (because you're properly serializing elsewhere in code), but it won't update on the spot.
So you go hunting further into StackOverflow to see if there was something subtle you missed about how to hook up a ListAdapter properly in your ListFragment. Maybe using setListAdapter doesn't properly set up your fragment's ListView as an observer for the adapter? That wouldn't make much sense, but you aren't sure. Perhaps it was something else...
Regardless, you spend a few hours hunting, and nothing comes up. All the errors with notifyDataSetChanged are pretty dumb errors about simply misusing the class - which, by now, you've confirmed many times over you haven't done.
So what's the problem?
Well, in case you had the same issue I did this afternoon, it may be that you accidentally let Eclipse fill in your functions for you, and among them you overrode registerDataSetObserver and unregisterDataSetObserver - which are now filled in with TODO stubs rather than calling the superclass's versions. WHOOPS.
The fix: delete those functions, dummy - you're not doing anything in there anyway!
How to share files from Mac to Windows
Step 1) Ensure you're on the same subnet (Google will tell you!)
Step 2) On Windows: go to the folder you'd like to share, open Properties, go to the Sharing tab, click the Share... button, then click Share at the bottom.
Step 3) On Mac: open Finder, go to the Go menu, select Connect to Server... (or Cmd-K). Connect to smb://[Windows machine's IP address here].
The Mac will prompt you to mount the shared folder as a drive, and you can now share files! :D
NDS Development: Don't always trust the tools...
Fun fact: sometimes when you try to compile your code, the devkitPro compiler will hit compiler errors, yet still report back a successful compilation. Generally that's not a problem so long as you're not relying on the bottom message to decide whether your code worked. But if you're developing in, say, Visual Studio, and your IDE is just reading that last bit to decide if stuff worked, and you always launch into the emulator right after building, you'll just keep seeing an old build: your bugs won't go away, and you'll spend a lot of time working out things you've actually already fixed...
TL;DR, just read the output logs when you have bugs - it could save your life. :D
Also: DeSmuME does not properly emulate the fact that the background scroll registers are write-only (ie, you can't use += with them and expect that to work on hardware, even though it will be fine on the emulator). Why are they write-only? *shrug* But you need to know that if you want your background to not be static (which you probably will).
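The usual workaround is to keep your own shadow copy of the scroll value and only ever write the register. A sketch (REG_BG0HOFS is the real libnds name for background 0's horizontal offset; the rest is invented):

    #include <nds.h>

    static int scrollX = 0;

    void scrollBackgroundRight(void) {
        scrollX++;              // update the shadow copy...
        REG_BG0HOFS = scrollX;  // ...and write it out; never read the register
    }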
Note to self:
If you're thinking of using sentinel values in a return from a C# function - because you really need an int, but you also need to check whether something is true - why not just return a bool and remember you've got out parameters? That's exactly the pattern int.TryParse uses: the bool says whether it worked, and the out parameter carries the value.
You're welcome. :)
#Altik-Reference#programming#C#out parameters#in C it's easier because everything is pointers so you can just do it anyway#so it's not hard to remember
NDS Development: Key detection is easy - but the documentation sucks
Just working to get some simple key-input management working for this demo project, and I've had to spend about an hour or so just trying to work out how the key-input functions for libnds work. Those functions are _really_ poorly documented, you guys.
So since I've finally figured out how most of them work (pretty much the simple ones - the ones I figure most people will probably use most of the time), here are some details in case you need them:
scanKeys() is a function that will grab the current state of button presses. Internally, the library manages differentiating between pressing and holding (ie, your character moves as long as you hold the d-pad, but you only slash your sword ONCE when you push a button). You can see which buttons are being held by using the keysHeld() function, and which buttons were just pressed by using keysDown().
keysHeld() and keysDown() return unsigned, 32-bit integers which are matched against an enum of button flags elsewhere in the library. These are just titled KEY_x, where x is the key you're interested in (ie, SELECT, A, B, etc.). This also contains information on whether the DS's lid has been closed and whether the touchpad is being touched.
If you want to use data from the touchpad (eg, which pixel coordinate is being touched), you should call the touchRead() function. You pass in a pointer to a data type that holds all the information you'd want, called a touchPosition. The documentation for that struct is pretty decent IMO, so I won't go into it here.
One last thing: keysHeld() and keysDown() return values that are bitwise ON if the represented button is HELD or DOWN. That is, a bit is true if and only if the button IS held/down. This is worth noting, because it's opposite from how the KEYINPUT register on the hardware works. It's the nicer way to do it, so nothing to complain about, just don't let it throw you.
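Rolled into one place, usage looks about like this sketch:

    #include <nds.h>

    int main(void) {
        int x = 0;
        touchPosition touch;

        while (1) {
            scanKeys();            // latch this frame's button state
            u32 held = keysHeld();
            u32 down = keysDown();

            if (held & KEY_RIGHT) x++;               // fires every frame while held
            if (down & KEY_A)     { /* slash sword once per press */ }

            if (held & KEY_TOUCH) {
                touchRead(&touch); // touch.px / touch.py are pixel coordinates
            }

            swiWaitForVBlank();
        }
    }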
#NDS Homebrew#Altik-Reference#programming#I made a simple doodle app#though it doesn't quite do what I'd like since I can't get it to draw on the bottom screen for some reason :(
NDS Development: LCD Mode
When looking at the various video modes you can set in libnds, some are referred to as "LCD mode(s)". I couldn't find anywhere this was explicitly said, but it just means you'll be using bitmap mode: ie, you use a block of memory to represent explicit pixels, and you store 16-bit colors in those positions to draw to the screen.
From my understanding, the GBA's bitmap mode works the same way, so if you're still confused about what this means, search for "GBA Tonc" on Google - there's a bunch of GBA development information on that site, and it's a very good resource if you're interested in that sort of thing. Among the pages are tutorials on the GBA's video display modes - the very first demo uses bitmap mode and explains how it works.
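For the DS, the minimal version looks something like this sketch (MODE_FB0 and VRAM_A_LCD are real libnds names; the pixel choice is arbitrary):

    #include <nds.h>

    int main(void) {
        videoSetMode(MODE_FB0);   // treat VRAM bank A as a raw framebuffer
        vramSetBankA(VRAM_A_LCD);

        // Write a 16-bit color directly into the pixel at (x, y); the
        // main screen is 256 pixels wide.
        int x = 128, y = 96;
        VRAM_A[y * 256 + x] = RGB15(31, 0, 0); // one red pixel

        while (1) swiWaitForVBlank();
    }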
#NDS Homebrew#Altik-Reference#programming#generally speaking using bitmap mode is really bad#but it is useful for simple demos if you're just trying to get a grasp on programming for the console#at least IMO anyway
NDS Development: ARM7 vs ARM9
So it turns out that the Nintendo DS has two different CPUs, each of which runs its own code (why they thought this was a good idea I can only guess...) - one ARM9 and the other ARM7. Generally, the ARM9 processor will be sufficient for your game processing. However, several important components are controlled exclusively by the ARM7 processor - most significantly, the touchscreen.
Solution: if you're using the libnds library provided with devkitPro (which I'd highly recommend, and most other sources seem to as well), this is already taken care of for you. Basically, the devkitPro template makefile produces a simple ARM7 executable that manages the important interrupts that processor is responsible for, then ships the information to a common global accessible by both processors so you can use it in your ARM9 code. Hence you should only ever need to write ARM9 code unless you're going especially hardcore.
The real significance: when you're looking into NDS development, there are a TON of references to ARM9 vs ARM7 shit - especially in the libnds documentation. Basically, you can (in general) just ignore things limited to the ARM7 processor, and not be concerned when something is limited to the ARM9 processor.
#NDS Homebrew#Altik-Reference#programming#ARM7 vs ARM9#Still really not sure why Nintendo felt the necessity to put two different types of processors in their machine#but w/e I guess
Instead of watching Nintendo's E3 panel...
...I decided I'd look into NDS homebrew development. Writing programs that work with the touchscreen just seems like too much fun to pass up lol.
Also, this means I'm going to be writing some reference pages for myself as I figure things out, so I don't lose progress as I'm reading. There's no nice tutorial that I can find for this stuff like there was for the GBA (Cearn, y u no make a ToncDS? D:)
Feel free to just blacklist the "NDS Homebrew" or "Altik-Reference" tags if you don't want to see the techno babble I'll be spewing this afternoon.
#NDS Homebrew#Altik-Reference#forewarning#hopefully I can get something simple churned out today#but it really is absurdly difficult to read these stupid tutorials#ugh
Lessons learned from putting computer parts together
Just a bit more reference for my future-self. Feel free to ignore it.
1) If the side panel has a fan on it, DON'T CONNECT IT BEFORE YOU'RE DONE D:
2) Do a bit more research into the PSU - apparently modular doesn't always mean fully modular (semi-modular units keep the main ATX and CPU power cables permanently attached). Either that or Corsair just felt like lying about their product to me. God, those giant ass cables that I couldn't remove were such a pain to work around. >__<
3) Having a bit of open space to work on the computer would be very nice. I shan't be attempting to build a computer in my tiny ass room ever again. +__+
4) There is a bit more research to be done on the motherboard, apparently. For example, not all motherboards have FireWire as a standard feature. Not that this is a big deal, but it's a bit obnoxious, to say the least, to be sitting at the case trying to figure out which pins to plug your cable into, only to find out an hour later that they don't exist. ;)
5) Give yourself plenty of time, and don't expect to be done very soon. Building isn't very complicated (at least, I don't think so anyway) but it is a fairly precise procedure.
6) RTFM - especially the Motherboard's installation manual.
7) Don't panic. There were a few times I was working on this and got concerned that the case I'd purchased wouldn't work with my motherboard, even though I'd done research to convince myself it would. Turned out there wasn't any problem (go figure) - I had just been freaking myself out, and got caught doing nothing for twenty minutes just because I was too worried about what might happen.
Granted, you don't want to break your expensive parts, but if you just sit there, it isn't helping anything either.
#Altik-Reference#computers#diy#build a PC#also I didn't really learn how to get cables to work but I really see why you want them to now. XP
Math problem that I should have gotten immediately but took me forever instead derp
Problem:
Prove that if R is a rectangle; f, g : R -> ℝ; and there exists some K in ℝ such that |f(x) - f(y)| <= K|g(x) - g(y)| for all x, y in R, then Mf - mf <= K(Mg - mg), where Mf is the sup of f on R, mf is the inf of f on R, Mg is the sup of g on R, and mg is the inf of g on R.
Proof:
Towards contradiction, assume that Mf - mf > K(Mg - mg).
By the definition of sup and inf, we know that for any ε > 0, there exist a, b in R such that:
(Mf - f(a)) + (f(b) - mf) < ε
In particular, we can set ε = (Mf - mf) - K(Mg - mg), which is positive by our assumption. Then we can rearrange terms, giving us:
(Mf - mf) - (f(a) - f(b)) < (Mf - mf) - K(Mg - mg)
This implies:
f(a) - f(b) > K(Mg - mg)
For this to be the case, f(a) > f(b): otherwise f(a) - f(b) <= 0 <= K(Mg - mg), since Mg >= mg and we may take K >= 0 (a negative K forces both f and g to be constant, making the claim trivial). Hence:
|f(a) - f(b)| > K(Mg - mg)
But Mg - mg >= |g(a) - g(b)|, since g(a) and g(b) both lie between mg and Mg. Therefore:
|f(a) - f(b)| > K|g(a) - g(b)|
Which contradicts our hypothesis that |f(x) - f(y)| <= K|g(x) - g(y)| for all x, y in R.
qed
#Altik-Reference#math#analysis#proof#sups and infs#Sorry to bother you guys with math#but I really did spend an absurd time on this#and it might come in handy later
iPhone lockscreen not displaying album cover
Some oddity in iOS 4.2 seems to cause music newly imported to the device to not display its album art. SOLUTION: restarting the device should do the trick.