Monday, August 24, 2009
Recent stock market rally
I recently sold the stocks I purchased in March 2009 for a $4k profit, but obviously I sold them too early. The positive news in the media seems to be stretching the rally out much longer, so I reinvested 50% of my money back into the market. Despite picking up these stocks at a higher price point, I'm hoping I can net another small profit. My picks this time are VTI, USO and C.
Thursday, August 20, 2009
Rewriting URLs with Apache mod_rewrite
I've noticed a growing trend on message boards where rewriting the URLs is becoming quite common. So instead of the URL being http://www.a-b-c.com/forums/board.pl?action=view&category=cars, the URL is http://www.a-b-c.com/forums/cars or something similar. Not only is this easier to remember and bookmark, it also lets search engine spiders crawl the pages more easily, and you get the added benefit of being able to move things around on the backend without breaking links.
This can easily be done with Apache's mod_rewrite module. Make sure .htaccess overrides are allowed in httpd.conf (AllowOverride) and mod_rewrite is loaded. Then create a .htaccess file with the following contents:
RewriteEngine on
RewriteRule ^articles/([^/\.]+)/?$ /cgi-bin/displayArticle.pl?$1 [L]
Any request of the form http://www.whatever.com/articles/abc will now go to http://www.whatever.com/cgi-bin/displayArticle.pl?abc
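To get something like the forum URLs in the first example, a similar rule could map the pretty path back to the script (a rough sketch; the script name and query parameters are just the made-up values from the example above):

RewriteEngine on
# Leave requests for real files and directories alone
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
# /forums/cars  ->  /forums/board.pl?action=view&category=cars
RewriteRule ^forums/([^/\.]+)/?$ /forums/board.pl?action=view&category=$1 [L]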
There's a lot more you can do here, and even more to mod_rewrite in general; in fact there's a whole book (or two?) on mod_rewrite out there.
System WPARs, quick and easy virtual machines in AIX
AIX allows you to create 'virtual' machines called Workload Partitions (WPARs). Unlike LPARs, which allocate physical resources and are more involved to set up, WPARs are virtual and quick to create - it's similar to Solaris containers/zones.
Here's the quick and dirty:
1. Make a virtual machine with IP address 192.168.1.40 (mkwpar -n dbTestBox -N address=192.168.1.40) - watch the screen fly.
2. List it (lswpar). Need more details? Try lswpar -L
3. OK, it's there; let's start it (startwpar dbTestBox)
4. Let's log in and change the root password (clogin dbTestBox)
In system WPARs, /usr and /opt come from the global environment and are mounted read-only. The rest of the filesystems live under the /wpars directory in the global environment's filesystem.
There you have it, you should now be able to ssh into your dbTestBox.
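Putting the steps together, the whole thing looks roughly like this (dbTestBox and the IP address are just the example values from above):

# Create a system WPAR with its own IP address
mkwpar -n dbTestBox -N address=192.168.1.40
# List WPARs; add -L for the full details
lswpar
lswpar -L dbTestBox
# Start it up
startwpar dbTestBox
# Log in from the global environment, then set the root password inside
clogin dbTestBox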
Tuesday, August 18, 2009
Storage expansion with minimal budget, hmm...
A good friend of mine emailed me asking for my opinion on what he should do about his company's growing data storage needs. They're expecting data and traffic to double. The concern is that their existing NetApp may not be able to handle the additional load and will require additional disk shelves for the anticipated data growth. These are valid concerns, and it's usually not rocket science to figure out. However, as I dug deeper, the situation was worse than I'd expected. Pretty quickly it was obvious that upgrading the filer to a bigger box was out of the question, and not only that, the existing filer is out of maintenance too. As if that weren't bad enough, his company doesn't want to spend the kind of money NetApp wants for a maintenance renewal. Ouch!
So now he's got several issues to deal with here:
1. There's a performance concern
2. There's a need for additional storage
3. The filer's not under maintenance
4. The company has almost no budget and wants to 'take care of this' with 'creative means'
To me this is a recipe for disaster: the company thinks they're saving money by dropping maintenance and doing things by 'creative means', but they don't realize they could lose far more money should they experience an outage, especially in a business where uptime equates to dollars.
The only suggestion I had was to buy a couple of used disk shelves on eBay, attach them to the filer, look for third-party hardware maintenance, and hope that the filer performs well. Of course, being a sharp guy, he started thinking about how he could further protect the data and improve uptime using rsync and the like. That would be an OK approach for a startup, but for a well-established company it doesn't make sense to me. Sure, it's all well and good until disaster strikes. Then you find out that all the time you thought you were rsync'ing the files successfully the jobs were actually failing, and because you didn't have the bandwidth to keep a close eye on yet another home-grown solution, your company is now losing $$$$ while you work around the clock to piece the data back together.
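For what it's worth, the kind of home-grown sync he had in mind usually boils down to a cron job like this (a rough sketch with made-up hosts and paths; the point is how easy it is to set up, and how quietly it can fail without monitoring):

# Nightly mirror of /vol/data to a standby box (example paths/hosts only)
rsync -az --delete /vol/data/ standby-host:/vol/data/ \
  || echo "rsync to standby-host failed" | mail -s "data sync FAILED" admin@example.com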
Having said that, I realize sometimes you gotta take the risks and do what you gotta do, but make sure all the bigwigs know exactly ALL the risks, and never attempt to guarantee against data loss... stuff happens.
Linux LVM refresher
Here's my problem: there's just too much to keep track of and remember, and I just can't fit it all in my two-cell brain. I learn something new and then a few months later I completely forget it. Take LVM on Linux, for example: between Solaris, AIX and Linux I can't keep my OS commands straight, and that doesn't even include apps, databases, Cisco IOS, EMC, NetApp, etc.
So back to LVM. It's actually quite straightforward; just a few commands are all it takes to get started (see the sketch after the steps below).
1. You start with the physical disks (or disk slices) such as /dev/sda1 /dev/sda2 /dev/sdb1 and /dev/sdb2. You then add them to LVM as Physical Volumes (PVs) using pvcreate (and pvdisplay to view).
2. Next you are ready to create Volume Groups (VGs) on these PVs using vgcreate (and vgdisplay to view).
3. Now you can create the Logical Volumes (LVs) in the VGs using lvcreate (and lvdisplay to view).
4. Finally you're ready to create a filesystem on the logical volumes using mkfs (fsck is only for checking an existing filesystem).
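A minimal end-to-end run looks something like this (the volume group name, LV name, size and filesystem type are just example values):

# Register the disks/partitions with LVM as physical volumes
pvcreate /dev/sda1 /dev/sda2 /dev/sdb1 /dev/sdb2
pvdisplay
# Build a volume group named datavg on those PVs
vgcreate datavg /dev/sda1 /dev/sda2 /dev/sdb1 /dev/sdb2
vgdisplay datavg
# Carve out a 10 GB logical volume named datalv
lvcreate -L 10G -n datalv datavg
lvdisplay /dev/datavg/datalv
# Put a filesystem on it and mount it
mkfs -t ext3 /dev/datavg/datalv
mount /dev/datavg/datalv /mnt/data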
Of course this just barely touches the subject of LVM on Linux, there are many other things you can do such as snapshots, backups/restores of metadata etc.
Dating site's owner makes $6M
I recently happened upon this info as I was surfing the web. Yes I'm very envious.
Apparently PlentyOfFish is an online dating service. It gets over 45 million visitors each month and 30+ million hits per day. What's more impressive is that it's single-handedly run by its owner, Markus Frind, working a few hours per day and making $6 million a year from Google ads. Pretty impressive indeed. For my geek readers: apparently Markus runs the website on a handful of Windows servers running IIS, but the secret sauce might be the use of a CDN (content delivery network, such as Akamai).
The beginnings of CGF Blog
Computers, Geekiness and Finances (CGF).
I started this blog to document some of the happenings in my life, which usually take place in the areas of computers, geekiness and finances. You see, I've been in the field of computers since I graduated from college, and it's what I do for a day job. And geekiness? Well, that describes the various things I collectively consider geeky, such as working on cars, tools, the garage, metalworking, etc. Finally, finances, because I've become more aware of my spending and saving habits.
So let's just begin shall we?