Domain-Driven Designer, Hexagonal Architect, Event Sourcer, Modeler, Coach | Trainer | Gardener of autonomous teams; 20+ years of experience in software development; currently programming with Golang; father of 2
How to onboard Software Crafters
Like most seasoned software crafters, I've had good, mediocre, and bad onboarding experiences throughout my career. Likewise, I'm sure I was involved in giving new developers joining my teams experiences from that full range.
Here are some lessons learned and ideas on how to do it right.
Make the newcomer feel sincerely welcome!!!
Do I really have to write anything here? Should be a no-brainer!
This piece is probably the most important. If they feel welcome it could even compensate for onboarding issues further down the line.
Give the onboarding process the highest priority
Don’t allow the onboarding process to run on the sideline.
Don’t allow situations where the team does not have all the time needed to support the process.
Instead, make it the team's highest priority for one or two sprints - two to four weeks, whatever dev process the team has. For example, in a Scrum setup it could be the sprint goal(s), like "Onboard Sally" (sprint n) and "Make Peter productive" (sprint n+1).
Make sure to block the necessary time somehow! If a new developer gets up to speed quickly the ROI will be massive.
Have a plan
Don’t just let onboarding happen, have some process / plan ready. Some people suggest to have a detailed schedule for first x weeks, months. For me personally this sounds like overkill, but people are different.
Have some standard sessions ready and scheduled
General company onboarding session (independent of role)
HR stuff like contacts for this and that, internal IT ticketing like ordering hardware, network or other problems, etc.
General session for IT / software dev / …
Structure of departments, important contacts, generic technology stuff.
Multiple sessions that are specific for the area the person will work in
Technologies, top level architecture (micro services, cloud providers, …), development processes like agile, CI/CD, …
Multiple sessions specific for the sub-area / team the person will work in
Same as above but more specific: services owned by the team and how they collaborate with their clients; walk the new developer through the software taking the role of a customer (e.g. install the app, register, go through the most important use cases)
Lunch dates with everybody in the team can be a good idea
Might be obsolete if the team regularly goes to lunch together.
Assign contact persons for the onboarding phase
A mentor
Could be outside of the team for company level questions, might be assigned for many months.
A coach
From inside the team, might be assigned for a shorter period like weeks, might also be a rotating role inside the team.
Hardware and software setup
All the hardware MUST be ready from day one!!!
Well, often day two would be OK, but absolutely make sure newcomers are never in the situation that they are blocked by missing hardware! Totally horrible experience!
Pair on setting up the hardware if necessary, but definitely on the full development software stack. This might be a challenge in heterogeneous situations, for example if the new hire is the first person in the team to use Mac, Windows, or Linux distribution xyz.
Pair or mob as much as it makes sense in the first weeks
I think it’s very important to talk to the newbies first, to find out how they feel comfortable. Mind introvert vs. extrovert and other personality types. This is not easy and requires empathy and experience. Some people might love to do only mob programming and pair programming until they feel they are fully empowered, some might need a mix with more time alone. Some might want to take the driver seat immediately when pairing, some might prefer to be co-driver at the beginning. The introvert types might like to learn a lot by doing code reviews. Lot’s of possibilities …
Expectations regarding productivity
I find this a tough one. It would be easy to say you don't expect any productive output for x weeks. On the other hand, the rookie's perception might turn negative - as if the team has no trust or confidence, the environment is not safe to fail, or the software is very fragile. Maybe a good starting point is to let the new developers decide when they want to commit something.
Summary
Don’t allow the onboarding to fail!
Hiring is hard and expensive (even if you do not invest enough time)!
There are probably not many bigger mistakes a company can make than having a newcomer quit at the end of the trial period (in countries where this exists - e.g. in Germany it's 6 months by default) or after a couple of weeks or months.
So for God's sake, make the onboarding process your highest priority, block time and have a plan!
I tried to go magic with a little reflection in Go(lang) ...
Unlike in (some) other programming languages, in Go reflection is considered OK to use in production code. Which does not mean it should be used without a good reason.
I tried to implement a command handler in a different way than with switch/case, given that the interface of the command handler can stay as it is:
type CommandHandler interface {
    Handle(command Command) error
}
The advantage of having a single entry point is that it's simple to wrap each handled command with generic functionality, e.g. transactional stuff. The only disadvantage of using a switch/case is that it's quite ugly, especially if the command handler grows and has to handle lots of commands.
Obviously I tried to solve this with reflection and I got it working:
package application

import (
    "errors"
    "myproject/customer/model"
    "myproject/customer/model/commands"
    "myproject/shared"
    "reflect"
)

type commandHandlerWithReflection struct {
    customers model.Customers
}

func NewCommandHandlerWithReflection(persons model.Customers) *commandHandlerWithReflection {
    return &commandHandlerWithReflection{customers: persons}
}

func (handler *commandHandlerWithReflection) Handle(command shared.Command) error {
    if command == nil {
        return errors.New("commandHandler - nil command handled")
    }

    method, ok := reflect.TypeOf(handler).MethodByName(command.CommandName())
    if !ok {
        return errors.New("commandHandler - unknown command handled")
    }

    in := make([]reflect.Value, 2)
    in[0] = reflect.ValueOf(handler) // method receiver
    in[1] = reflect.ValueOf(command) // first input param - the command

    response := method.Func.Call(in)

    switch response[0].Interface().(type) {
    case error:
        return response[0].Interface().(error)
    case nil:
        return nil
    default:
        return errors.New("commandHandler - unexpected type returned when command was handled")
    }
}

func (handler *commandHandlerWithReflection) Register(register commands.Register) error {
    newCustomer := model.NewUnregisteredCustomer()

    if err := newCustomer.Apply(register); err != nil {
        return err
    }

    if err := handler.customers.Save(newCustomer); err != nil {
        return err
    }

    return nil
}

func (handler *commandHandlerWithReflection) ConfirmEmailAddress(confirmEmailAddress commands.ConfirmEmailAddress) error {
    customer, err := handler.customers.FindBy(confirmEmailAddress.ID())
    if err != nil {
        return err
    }

    if err := customer.Apply(confirmEmailAddress); err != nil {
        return err
    }

    if err := handler.customers.Save(customer); err != nil {
        return err
    }

    return nil
}
But there is one issue with that solution: The methods which the Handle() method dispatches to should be private, as they are not part of the public interface!
Even if NewCommandHandlerWithReflection() returned an interface instead of the concrete object - which is not idiomatic in Go (and I will only bend this rule with a very good reason) - any client of the command handler could still simply type-assert it to the concrete type.
Unfortunately Go's reflection does not support unexported methods. The conceptual reason is that this would render the whole exported/unexported paradigm quite useless. I wish it would support that from within the same package, as in my use case. There might be technical reasons apart from the conceptual ones, though.
So once more, playing with reflection in Go did not end in an acceptable solution, but at least I have learned how to do it. ;-)
Luckily there is no real issue with using switch/case for dispatching instead, so I'll just stick with that.
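For completeness, here's a minimal sketch of the switch/case variant, reusing the imports and (hypothetical) types from the code above - note that the dispatch targets can now be unexported:

func (handler *commandHandler) Handle(command shared.Command) error {
    if command == nil {
        return errors.New("commandHandler - nil command handled")
    }

    // A type switch instead of reflection: a bit more verbose, but the
    // handler methods can stay unexported.
    switch actualCommand := command.(type) {
    case commands.Register:
        return handler.register(actualCommand)
    case commands.ConfirmEmailAddress:
        return handler.confirmEmailAddress(actualCommand)
    default:
        return errors.New("commandHandler - unknown command handled")
    }
}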
Implementing Domain-Driven Design with Go (Golang)
I recently joined the Domain-Driven Design group on LinkedIn and the group owner David asked me:
"How do you like golang? Any features of Go that make it work well with DDD or microservices?"
So here we go, I'll use that stimulus to start blogging again.
A bit about my background and history with Go
Before I started programming with Golang in my last company, I did PHP for almost two decades (true story). The last PHP version I worked with was PHP 7.0, which had all relevant features to do SOLID OOP, with the exception of strict typing (that was added later). My team was switched to a new greenfield project working with Go, in a microservice environment. We did DDD with EventSourcing from the start. Some more tech stack namedropping: Kafka, PostgreSQL, Docker, Kubernetes - backend services talking RPC with "go-micro" as their framework.
Now we had a similar journey getting into Go as many other devs coming from OOP: we started trying to turn Go into OOP, which is generally not a good idea. Actually, if you favour "composition over inheritance" there is not SO much of a difference, at least compared to PHP (PHP has no generics, no method overloading, no other OOP voodoo). Most of the differences lie in what's idiomatic in Go, which can safely be ignored given developers make educated decisions, for example by declaring higher-level concepts more important than language concepts.
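To illustrate the composition side with a made-up example (none of these types are from our project): Go composes behaviour via struct embedding instead of inheritance:

package model

// auditable is a small, reusable piece of behaviour.
type auditable struct {
    createdBy string
}

func (a auditable) CreatedBy() string {
    return a.createdBy
}

// Customer composes auditable by embedding it: Customer gains the
// CreatedBy() method without any inheritance hierarchy involved.
type Customer struct {
    auditable
    name string
}

func NewCustomer(name, createdBy string) Customer {
    return Customer{auditable: auditable{createdBy: createdBy}, name: name}
}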
Some details about Go and its challenges
There is one important difference from OOP languages: how interfaces work in Go. You don't declare that an object implements an interface; instead it's "duck typed", so anything that fulfills all method signatures of some interface just magically implements it. In Go it's common to define interfaces not on the implementation but on the client side. Put another way, every client using the implementation of some service can bring its own interface and use interface segregation (e.g. if only a portion of the service's methods are needed). I totally like that, but it conflicts a bit with hexagonal architecture, a.k.a. ports and adapters. Luckily this is a case of "idiomatic Go", so here's our good reason to ignore it. As an example, in DDD you would define the interface for a Repository or a DomainService in the Domain package; in Go you would define it where it's used, e.g. in the Application package.
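A minimal sketch of what that looks like (all names here are made up for illustration): the client package declares the narrow interface it actually needs, and any repository with a matching method satisfies it implicitly:

package application

// Customer is a hypothetical read model returned by the repository.
type Customer struct {
    ID    string
    Email string
}

// customerByIDFinder is declared here, on the client side. Any type with a
// matching FindByID method - e.g. a repository in some infrastructure
// package - implements it implicitly, without declaring anything.
type customerByIDFinder interface {
    FindByID(id string) (Customer, error)
}

// CustomerView depends only on the single method it needs
// (interface segregation), not on a full repository interface.
func CustomerView(finder customerByIDFinder, id string) (Customer, error) {
    return finder.FindByID(id)
}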
Some other common ways of doing things in Go are probably not even described as "idiomatic", e.g. creating objects like structs on the fly without using factory methods, which is only possible if the internals of such a struct are public (Go calls it exported). As DDD practitioners we want our ValueObjects immutable, so we'll keep our stuff private and have proper factory methods.
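To make that concrete, here is a minimal sketch of an immutable value object with a proper factory method (a hypothetical example, not code from our project):

package model

import (
    "errors"
    "strings"
)

// EmailAddress is an immutable ValueObject: the field is unexported, so
// code outside this package can neither construct nor mutate it directly.
type EmailAddress struct {
    value string
}

// NewEmailAddress is the factory method - the only way to obtain an
// EmailAddress - so the invariant is guaranteed at construction time.
func NewEmailAddress(input string) (EmailAddress, error) {
    if !strings.Contains(input, "@") {
        return EmailAddress{}, errors.New("invalid email address")
    }
    return EmailAddress{value: input}, nil
}

// String exposes the value read-only.
func (e EmailAddress) String() string {
    return e.value
}

// Equals compares by value, as ValueObjects have no identity.
func (e EmailAddress) Equals(other EmailAddress) bool {
    return e.value == other.value
}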
I plan to write about how I would implement things (VOs, Entities, etc.) in Go nowadays - after 2.5 years of learning and changing my mind back and forth at least 5 times about every detail.
My friend James has blogged about BYPASSING GOLANG'S LACK OF CONSTRUCTORS, which he could also have named "How can I avoid all those nil pointer checks in Go?" - it's very interesting!
Spoiler alert -> With quite some effort you ... still can't fully avoid those nil pointer checks (but there are some ideas on how to reduce them)
What makes Go a good fit for DDD / Microservices
Speed
There are faster languages around, but Go runs fast and it compiles very fast, mostly because of the simplicity of the language. And it's getting faster, afaik mostly because of improvements in the garbage collector. For people like me - coming from an interpreted language like PHP or Python - it's fast as lightning, which raises the bar for (premature) performance optimisations quite high. I profiled our stuff a while ago and there was no worthwhile target for optimisations in our event-sourced application (unmarshalling was the biggest "thing" but only accounted for around 10% of runtime). To give a rough idea about speed: we could easily serve any command that did not have to make synchronous calls to external services in under 10 ms - with full reconstitution of the Aggregate, no snapshotting or other caching involved.
Simplicity
As Microservices tend to be small they also tend to be relatively simple (this is for sure a simplification), so a language with simplicity as a core concept is a natural fit. I'll put another fact into this category, which is the lack of sophisticated frameworks, ORMs, etc. For sure frameworks and ORMs exist, but they tend to be more lightweight, and none of them is a de-facto standard which you have to use because everybody does - the kind of standard where you suddenly spend more time fighting the accidental complexity of such things than solving your domain challenges.
Concurrency
I totally love Go's way of concurrency with goroutines, and channels to synchronise them if necessary. As an example, we implemented a very efficient, high-throughput event publisher, which publishes Domain Events after they were committed to the Event Store. Often all you need is to fire and forget a goroutine to do things that must not block your main application flow, e.g. sending notifications (emails), given it's not critical if the sending fails.
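A minimal sketch of that fire-and-forget pattern for the notification example (all types are made up for illustration):

package application

import "log"

// Customer and Mailer are hypothetical types for this example.
type Customer struct{ Email string }

type Mailer interface {
    SendWelcome(email string) error
}

// RegisterCustomer persists the customer synchronously, then fires and
// forgets the notification: a failure is only logged and never blocks
// or fails the main application flow.
func RegisterCustomer(customer Customer, save func(Customer) error, mailer Mailer) error {
    if err := save(customer); err != nil {
        return err
    }

    go func() {
        if err := mailer.SendWelcome(customer.Email); err != nil {
            log.Printf("sending welcome mail failed (non-critical): %v", err)
        }
    }()

    return nil
}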
Summary
I like Go a lot, especially for not-too-big services. It takes a while to find your own way through Go, SOLID, DDD and hexagonal architecture, but I think it's worth it, because it forces you to think about concepts instead of blindly applying them. I still love PHP for various reasons (community, ecosystem, ...) but would not switch back without a very good reason (e.g. a new job requiring it).
Run virtual machines with all versions of Internet Explorer on VirtualBox
There is this nice project on Github that makes it very easy to get virtual machines with all versions of Internet Explorer running inside VirtualBox, so you can test your websites against them during web development:
xdissent/ievms
By default, IE6, 7 and 8 will run in Windows XP, IE9 in Windows 7 and IE10 in Windows 8.
I will explain how I got this working on Ubuntu 12.10 (Desktop), but generally this should work similarly on all Linux distributions, and my co-worker has it running on OS X.
For virtualization to work at all your CPU must support hardware virtualization and it must be enabled in the BIOS.
It is important that you install the header files for your kernel before you install VirtualBox, so it can compile the right kernel modules!
sudo apt-get install linux-headers-generic
After that you're ready to install VirtualBox, you can choose the vendor packages or the OSE (open source edition):
sudo apt-get install virtualbox virtualbox-guest-additions-iso
or
sudo apt-get install virtualbox-ose virtualbox-ose-guest-additions-iso
During the installation you should see the compilation of the needed modules - that's why I suggest installing in a shell and not with Synaptic or the Software Center.
Now you'll need curl and unar before running the script to install the virtual machine images:
sudo apt-get install curl unar
Finally, execute the command shown in the ievms Github project above:
curl -s https://raw.github.com/xdissent/ievms/master/ievms.sh | bash
Alternatively you can install the images one by one, as the full install takes very long. Maybe start with the 3 old IEs that run on XP:
curl -s https://raw.github.com/xdissent/ievms/master/ievms.sh | IEVMS_VERSIONS="6 7 8" bash
Voilà - after the install is finished you should be able to run your virtual machines and test your website on the different IEs!
I had to change the DNS servers in all virtual Windows installations on my PC - I used the two public Google DNS servers, 8.8.8.8 and 8.8.4.4 - and after that everything ran fine.
Happy hacking, Anton
Gearman job server with mysql persistence on Ubuntu
Finally I found the answer to how to get a Gearman job server running on Ubuntu (12.10 server amd64 in my case):
Install gearman-job-server on Ubuntu 12.04 LTS from PPA
This build includes the mysql plugin (not the drizzle plugin) and it worked out of the box with a standard MySQL 5.5 server installed via the standard Ubuntu package.
Have fun, Anton :-)
Remove old kernels and headers on Ubuntu / Debian
After each kernel update you have one more kernel + headers + modules filling your boot partition. If you have a separate boot partition with limited size you might want to free that space, and here's how that's done on the shell.
dpkg -l 'linux-*' | sed '/^ii/!d;/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d' | xargs sudo apt-get -y purge
The dpkg -l part lists all installed linux-* packages, the sed expressions filter out everything matching the running kernel's version and extract the remaining package names, and xargs hands those to apt-get purge. Before you run it, you should be sure that you're not running one of the kernels that will be purged:
root@db3:~# uname -a
Linux db3.mvs-corp.com 3.5.0-27-generic #46-Ubuntu SMP Mon Mar 25 19:58:17 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
If so, you should reboot first to run on the latest kernel you have installed.
Also, before you really purge anything, you should first run only the part of the command that lists the packages that will be purged:
dpkg -l 'linux-*' | sed '/^ii/!d;/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d'
Alternatively you can remove the -y switch from the full command above, so apt will ask you for confirmation before applying the changes.
Enjoy, Anton
The light year - an often misunderstood unit of length
Dear advertisers / media / others: A light year is a unit of length (a quite big one) and nothing else! Go ask Wikipedia if you don't believe me: http://en.wikipedia.org/wiki/Light_year
Now, with that in mind ... sentences containing stuff like that are pure bullshit:
... surfing light years faster ... (no, it's not a speed unit)
... takes light years ... (no, it's not a time unit)
Please stop misusing our most impressive unit of length, for the sake of Einstein!
:-)
Howto fix: Tomboy sync fails with Ubuntu One
I have two computers syncing Tomboy with my Ubuntu One account on Ubuntu 11.10 (Oneiric). On one computer it worked without problems, but on the other the sync failed with this error (found in "~/.config/tomboy/tomboy.log"):
Synchronization failed with the following exception: A note with this title already exists: New Note Template
In German:
Synchronization failed with the following exception: A note with this title already exists: Neue Notizbuchvorlage
I couldn't find a note in Tomboy with that name because it is a template, which does not show up as a note. After some searching I found this: Tomboy fails to synchronize its notes with Ubuntu One (scroll down to the tip from Martijn).
It did not work exactly the way Martijn suggested, but similarly:
go to Tomboy's settings, open the new note template and change the title
start the sync, it should work now
open the new note template again and delete it
sync again
That way you should only have one new note template on all your computers and a working sync!
Enjoy!
How to make Zend Studio start from Ubuntu Unity launcher
If you "install" Zend Studio on Linux it's nothing else than extracting the .tgz archive. When you start it and pin it to the launcher there are some issues:
the Zend Studio icon is not used
when you try to start it from the launcher it does not start
The solution is quite simple.
Create a launcher icon with some editor:
~/.local/share/applications/zend-studio.desktop
Add the following content to it:
[Desktop Entry]
Name=Zend Studio
Exec=/path/to/ZendStudio/ZendStudio
Icon=/path/to/ZendStudio/icon.xpm
Terminal=false
Type=Application
StartupNotify=true
Make the file executable:
chmod +x ~/.local/share/applications/zend-studio.desktop
Then navigate to that location via Nautilus (press CTRL+H to toggle hidden files), launch the "Zend Studio" icon and pin it to the launcher.
My MongoDB cache backend adapter for Zend Framework is on Github
I just put my MongoDB cache backend adapter (for Zend Framework 1.11, not for ZF2!) on Github. It's a fork from Stunti (Olivier Bregeras) with some fixes and improvements, so Olivier did the hard work, not me!
You can find it here: Zend_Cache_Backend_Mongo
Details about Zend_Cache can be found here: Zend_Cache
MongoDB is very easy to install on Debian / Ubuntu; it's nothing more than a
apt-get install mongodb-server
to get the server running, listening (only) to localhost. DBs in MongoDB are created when they are requested, so you don't need to do anything apart from getting my cache backend running. :-)
I did some performance tests with it and compared to Memcache, here are my numbers:
Memcache
inserted 1000 items in 201ms (items/second: 4986)
loaded 50000 random items in 5455ms (items/second: 9165)
1000 cache miss requests 109ms (items/second: 9153)
Memory peak usage: 6029312 Byte (5.75 MB)
MongoDB
inserted 1000 items in 339ms (items/second: 2946)
loaded 50000 random items in 5747ms (items/second: 8700)
1000 cache miss requests 115ms (items/second: 8660)
Memory peak usage: 6029312 Byte (5.75 MB)
So MongoDB is a bit slower on writes and has almost the same read performance.
But the big advantage is that MongoDB supports tags, which Memcache doesn't. So you can use the MongoDB backend adapter standalone (no two-levels cache) with the full feature set - the data is even persistent in MongoDB - at almost the performance of Memcache. You can also use replication or even sharding with MongoDB, but you need to set this up yourself.
Enjoy!
P.S.: Once I'll find the time I'll describe how MongoDB can do even better as a caching layer that caches your data as complex objects (including arrays) that supports atomic updates (even increments) without the need to read the data first.
Zend_Cache with backend adapter "file" can be Ouch!
Thanks to the brilliant work some guy did ;-) to make the Two Levels Cache work in Zend Framework, I was able to find out how bad a Zend_Cache with the file backend adapter (used as the slow layer in our Two Levels Cache) can be performance-wise.
With the help of XHProf I was able to find out why some requests in our web application took several seconds (causing this error). Using a quite long cache lifetime (24 hours or so) and having lots of cache items, the cleanup which the file backend adapter has to do on every x-th request (configurable) was very heavy. It had to go through tens of thousands of files to find the ones to remove (as they had timed out), which takes quite some time.
I solved this problem by using a different cache backend adapter for the slow layer. The winner for this task is MongoDB with some code I found on Github to build a Zend_Cache_Backend_Mongo. I don't link to it as I had to do some changes to the original code. Will put my version on Github and drop another post here once this is done.
The conclusion is that using the file adapter for Zend_Cache only makes sense if you have a small number of cache items (maybe up to 1000 or 2000 is OK); otherwise you'll get into serious performance trouble sooner or later.
Why doing keepalive http requests against a web service can be dangerous
Well, we have a service (implemented with Zend Framework) which we call from our main web application via Zend_Http_Client. It was using keepalive connections for the HTTP calls. We quite often got an error saying "Unable to read response, or response is empty". After some investigation I found the reason:
our Apache was configured with KeepAliveTimeout 5
some main web application requests that did calls to our service took longer than 5 seconds (caching problem)
the keepalive connection was opened at the beginning of the long request and was used several times within it, so that we ended up trying to reuse the same keepalive connection over a span of more than 5 seconds
obviously Apache had closed the keepalive connection after 5 seconds -> bang
So the easy fix was to disable keepalive in our service calls.
Generally I would suggest avoiding keepalive in such situations. The performance win is very small, but it can be a potential point of failure. If you need the last bits of performance, you have to be sure that your keepalive connections are always reused within a window shorter than your web server's KeepAliveTimeout.
OR you must catch the errors from closed keepalive connections and re-establish them, which is not very complicated in Zend Framework.
Profiling a PHP application
There are quite a few ways to profile PHP applications, as you can read for example in Eric Hogue's blog. We had been using Xdebug with KCachegrind at Yahoo! after rolling out the new Movies page for Europe. Compared to XHProf (check the documentation here) with XHGui, that's quite complicated and you can't use it in a production environment - at least not on a regular basis (aka just keep it running).
If you want to give it a try, you don't have to download and manually install it (as Eric suggests) because it's a PECL package, so you can simply install it with:
sudo pecl install xhprof
Then you should follow the git installation steps as Eric describes under XHProf. I installed it under "/usr/local", so I changed to that directory and issued the "git clone" command there, which leaves the XHGui stuff under "/usr/local/xhprof".
If you want to have nice call graphs (see Eric's blog entry for a sample image) you'll have to install Graphviz which on Ubuntu/Debian is nothing more than a:
sudo apt-get install graphviz
I recommend following Paul Reinheimer's blog about using XHGui - he's the author of that package.
Here are some configuration hints for the "config.php" file, as Paul's blog entry and the comments in the file don't make everything clear:
$_xhprof['display'] = false;
$_xhprof['doprofile'] = false;
The display param only needs to be true if you want a link to the profiling result page in XHGui, which imho only makes sense if you "manually" profile with ?_profile=1. If the doprofile param is true, then every request will be profiled - not good for a production environment. Use the weight param to profile only some requests and you should be fine.
Only one thing does not yet work as expected in my setup: only page requests via browser seem to be profiled (or logged by XHGui). API-like requests (via Zend_Http_Client in my case) don't show up. I'll update this post once I find the answer.
Read more about PHP profiling in those slides from Justin Carmony.
How to fix Two Levels Cache in Zend Framework
This posting describes how to fix / use the Two Levels Cache in Zend Framework 1.11.11 so that it works with tags for cleaning items. The following preconditions apply:
Memcached is used as the fast_backend
File is used as the slow_backend
still, my posting should also hold for other backends
you want to store your items in the cache with a long (maybe infinite) lifetime, give them tags and purge them with those tags (you can read about tagging and cleaning here) every time you change data in your application, e.g. add objects or update them
First issue - infinite lifetimes in your fast_backend
There is a little bug in 1.11.11's Two Levels Cache code that results in a lifetime of "null" for each item in the fast backend, which (at least for most backends) amounts to an infinite lifetime. My patch below fixes that.
Second issue - default priority of "8"
The default priority of "8" for the fast backend leads to a lifetime that is one third of the lifetime you set in your options (and that is used for the slow backend). This does not cause errors, but it hurts performance, as your fast cache purges items way too early. There is no easy way to fix the code so you can set the priority yourself, as there are multiple issues which prevent the priority from reaching your fast backend. My patch simply changes the default to "10", which is the behaviour I would expect and which leads to optimal performance (all read requests hit the fast backend, unless your fast backend has rotated items out due to a high filling percentage).
Third issue - default for option "auto_refresh_fast_cache" is "true"
auto_refresh_fast_cache (Boolean, default TRUE): if TRUE, auto refresh the fast cache when a cache record is hit
Easy to fix ... just set it to "false" in your Two Levels Cache config. ;-)
Otherwise, each time an item is hit in the fast backend, its lifetime is reset to the original one, so it does not expire until there is no hit for the length of your lifetime. The slow backend's lifetime is NOT extended the same way, so you end up with items in your fast cache that you cannot clean via tags anymore, as they have already been purged from your slow cache.
The patch
--- TwoLevels.php	2011-07-22 14:04:41.000000000 +0200
+++ TwoLevelsPatched.php	2011-10-27 13:16:31.854461149 +0200
@@ -187,7 +187,7 @@
      * @param int $priority integer between 0 (very low priority) and 10 (maximum priority) used by some particular backends
      * @return boolean true if no problem
      */
-    public function save($data, $id, $tags = array(), $specificLifetime = false, $priority = 8)
+    public function save($data, $id, $tags = array(), $specificLifetime = false, $priority = 10)
     {
         $usage = $this->_getFastFillingPercentage('saving');
         $boolFast = true;
@@ -480,19 +480,18 @@
      */
     private function _getFastLifetime($lifetime, $priority, $maxLifetime = null)
     {
-        if ($lifetime <= 0) {
-            // if no lifetime, we have an infinite lifetime
+        if ($lifetime === null) {
+            // if lifetime is null, we have an infinite lifetime
             // we need to use arbitrary lifetimes
             $fastLifetime = (int) (2592000 / (11 - $priority));
         } else {
-            // prevent computed infinite lifetime (0) by ceil
-            $fastLifetime = (int) ceil($lifetime / (11 - $priority));
+            $fastLifetime = (int) ($lifetime / (11 - $priority));
         }
-
-        if ($maxLifetime >= 0 && $fastLifetime > $maxLifetime) {
-            return $maxLifetime;
+        if (($maxLifetime !== null) && ($maxLifetime >= 0)) {
+            if ($fastLifetime > $maxLifetime) {
+                return $maxLifetime;
+            }
         }
-
         return $fastLifetime;
     }
How to apply?
If you copy the diff code above to a file named "file.diff", you change to the directory where TwoLevels.php is located and issue this command:
patch -p0 < file.diff
Attention - your memcached version could be buggy
Some versions of memcached have bugs. I had one on a Debian system - I think it was version 1.4.2 or 1.4.5 - which purged items before they had expired! Now we have 1.4.7 and it works fine.
Happy hacking!
How to fix unreadable Tooltips in Zend Studio in Ubuntu 11.10
Well, I just got my Ubuntu updated to 11.10 (Oneiric Ocelot). The update worked without problems. I figured out today that the current NVIDIA X Server Settings make a little trouble when switching from laptop to external monitor - usually this needs two tries, but then it works. Also, the energy saving options were reset, so that closing my laptop would send it to hibernate or so.
Anyways, a more tricky problem, which I did not have to solve for the first time:
In Zend Studio (this should also be true for Eclipse) all the tooltips had black text on a black background, which not only looks ugly - it's also not readable with my (regular, human) eyes. In older Ubuntu versions you could change the theme colors in the system settings. In 11.10 they even removed that; you can only pick from 4 or 5 default themes, and I definitely want to keep "Ambiance", all the others look very ugly.
So after some research I found this comment from Christoph pointing to his posting in the Ubuntu forums showing the only solution which currently seems to work. I had already tried to change the CSS in the "Ambiance" theme, but I had tried files under "gtk-3.0" ;-)
I'm using a light yellow background with black foreground now, so my first line in the file that must be edited (/usr/share/themes/Ambiance/gtk-2.0/gtkrc) looks like:
gtk-color-scheme = "base_color:#ffffff\nfg_color:#4c4c4c\ntooltip_fg_color:#000000\nselected_bg_color:#f07746\nselected_fg_color:#FFFFFF\ntext_color:#3C3C3C\nbg_color:#F2F1F0\ntooltip_bg_color:#F5F5B5\nlink_color:#DD4814"
Hope this saves some people some time, good luck!
Online webserver monitoring solutions
Carlo just recommended those:
http://www.appfirst.com/
https://www.cloudkick.com/
Will check them out when I find the time ...
Flat rate tax for Schland - why the hell not?
For once a posting in German, as it's a purely German topic.
Yesterday I read the article in the SZ from our neighbours (not stolen, they're on vacation ;-) ) about the success of the flat rate tax - especially in Eastern Europe - and now I'm asking myself why a radically simplified tax system simply doesn't stand a chance in Germany!?
Everybody in Germany seems to hate our ultra-complicated tax system. In 16 countries of Eastern Europe and the Baltics, as well as in Russia, Belarus, Ukraine and the two Channel Islands Jersey and Guernsey, this flat rate tax exists: just one tax rate with few exceptions and, in return, virtually no subsidies or ways to reduce your tax burden by creatively exploiting loopholes. There it's a success model which, among other things, attracts investors to those countries.
Quite different in Germany: we have our great progressive tax system with what feels like 99999 "exceptions", and we sit for hours or days over our tax returns or need a tax advisor. The only people this benefits are those who are so rich that their main occupation is figuring out how to save every last euro of taxes. And that works best when you have 500,000+ euros to tax, not 50,000.
But it's not the case that with a flat rate tax, or a simplified system with for example 3 brackets (Merz, Kirchhof ...), the rich would necessarily have to pay more. So what prevents us from finally being able to do our tax returns on a beer coaster - or, in Alan Rabushka's original, on a postcard? The lobby of tax advisors, tax officials, vendors of tax return software, ...?
Answers to this question are requested!!!
Of course it would also be desirable to shift the burden a little bit from the shoulders of low and middle income earners onto the strong shoulders of top earners, so that work pays off again, you know! ;-)
So, dear government, finally tackle this issue instead of continuing to make yourselves ridiculous with across-the-board tax cuts in the penny range! And while you're at it decluttering, you can tackle the social security system, the pension system and the health system too. Others (see above) have managed it, so it should be a piece of cake for Schland!