Language translation for this site

To share with a wider audience, I have placed a Google Translate gadget at the upper left of these pages. [more]

I'm a frustrated linguist, and I will try my best to write in a way that is likely to produce correct translations, but forgive me if I don't. Hopefully soon I can understand your languages too.

It requires JavaScript to be enabled, and after you select a language it redirects you to Google, which hosts the translated version of the page. Thanks to Google, but hopefully soon you won't have to leave the page to view it in a different language.

There is also BabelFish (from Yahoo/AltaVista), but I used the Google gadget as it seems simpler and supports more languages. The only limitation is that the Google version supports only English as the source language, but in this case that won't be a problem.

I'm hoping this would be a useful feature.

Suspected Trojan or Virus qxty9be.cmd

A suspected Trojan is messing up my PC at this very moment. [more]

I attached my portable drive to a computer that didn't have antivirus today. The computer seemed to be working fine, but afterwards I found a certain "autorun.inf" and "qxty9be.cmd" on that disk.
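For context, worms like this typically spread by dropping an autorun.inf onto every writable drive so that Windows launches their payload when the drive is opened. I haven't inspected the actual file, but a malicious autorun.inf generally looks something like this (illustrative only, not the real file's contents):

```
[autorun]
; launch the payload when the drive icon is double-clicked
open=qxty9be.cmd
; hijack the Explorer "Open" and "Explore" context-menu actions as well
shell\open\Command=qxty9be.cmd
shell\explore\Command=qxty9be.cmd
```

This is why merely clicking a drive in Explorer can be enough to run the payload.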

I scanned the disk and Symantec didn't see anything! However, in my attempt to learn more about it (thinking I could run the .cmd file inside a virtual machine), I copied it to my local disk.

After copying, I couldn't see the file (it was hidden, though weirdly enough I could see it while it was on my portable drive). So I went to Tools > Folder Options > View > Show hidden files and folders.

That's when Auto-Protect came up. It detected something! But the Auto-Protect Results window froze. Stupid me then tried to scan the folder again, and the manual scan froze too.

I didn't know copy/pasting without running could cause some unexpected behavior.

I had the same behavior on another PC earlier, and that one ended up with booting problems. If this was indeed the cause, then it did something really serious.

I couldn't find any resource about it on Google except one, with no resolution whatsoever, and it was posted only yesterday or so.

The Symantec Auto-Protect Results and Manual Scan windows are actually still hanging as I write this. So this feels like a goodbye letter, as I expect something bad to happen once I restart (similar to what happened earlier).

So this is just a warning. Wish me luck (I have backups for sure, but still, things are never going to be the same again)…


Just restarted, and unlike the other machine, this one survived. Thanks to the following:

1) ZoneAlarm (the OSFirewall feature, I think) – it prompted whether to allow the *.cmd to run or not (of course I denied it)

2) Spybot Search & Destroy – scanned and detected the Trojan as Win32.Ruju.a, but only God (and its creator) knows what else it does

3) Acronis True Image – did a backup even while Symantec was freezing (it might end up as a corrupted backup, but it's trivial to back up, so I did it anyway). Plus my previous backup gave me the confidence that, if worse comes to worst, I have one.

4) Symantec Auto-Protect – not for protecting me, but at least for detecting the problem despite freezing. The Trojan still managed to get through; I'm certain of this because clicking on my other drives was running the *.cmd file, and Spybot confirms the Trojan made its way in. I also did a manual scan on the portable drive before this erupted and it didn't detect anything. What's wrong, Symantec? (I have v10.)

Hope that would be the last of it. Gotta get back to work


* This worm is generally transmitted via the AutoRun feature, so it's always best to turn that feature off. In the Group Policy Editor (Start > Run > gpedit.msc > User Configuration > Administrative Templates > System), set "Turn Off Autoplay" to Enabled.
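If gpedit.msc isn't available (e.g. on XP Home editions), the same policy can be set directly in the registry; a minimal sketch as a .reg file (the 0xFF value disables AutoRun for all drive types, and the same key can also go under HKEY_LOCAL_MACHINE to apply machine-wide):

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer]
"NoDriveTypeAutoRun"=dword:000000ff
```

Log off and back on (or restart Explorer) for the change to take effect.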

* Furthermore, here's more detailed information about the issue:

Will keep updating this as necessary. Let's make the world a safer place…

GoDaddy, IIS 7, DotNetNuke, Joomla, WordPress

UPDATE (2/2/09): On GoDaddy, even on deluxe/premium/unlimited hosting accounts, applications (e.g. a .NET application root) are configured as virtual directories. Thus the application root path is still rooted at the ROOT of the hosting disk space. This makes sense, I think, and so even if a domain points to, say, ROOT/subfolderA, Response.Redirect("~/Page.aspx") will still resolve to <domain>/subfolderA/Page.aspx, and it will appear as such in the browser.

This is the last post for January 2009, and I just wanted to say that on top of upgrading to IIS 7, I also upgraded my hosting subscription to GoDaddy Unlimited (disk space, traffic, databases – though there is a 200MB hard limit on database size).

Along with this comes support for PHP 5, which means I will be trying out a couple of PHP and MySQL apps. [more]

My web development life actually started with a little PHP + MySQL rather than .NET, but I haven't practiced them since: I got addicted to IntelliSense and drag-and-drop, and SQL Server just came along with .NET. So I'll somehow be brushing up on them a bit – maybe not in detail, but by trying out popularly used apps like Joomla and WordPress.

With this expect a couple of posts on these topics.

Starting with the fact that, as of this writing, GoDaddy DOESN'T have the Microsoft URL Rewrite Module for IIS 7 installed, for some reason. This is one of the most useful (and sought-after) features of IIS 7, but when I had to rewrite URLs I got HTTP 500 errors (rewrite rules live in the system.webServer element, and any error there causes an HTTP 500). I called their customer support, and only then did I learn they don't have this important feature yet. I would have liked to know sooner, but the PHP support on IIS 7 is still a good deal.
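For reference, this is the kind of configuration I mean – a minimal sketch of a rule for the Microsoft URL Rewrite Module in web.config (the rule name and URL pattern here are made up for illustration). On a server where the module isn't installed, IIS doesn't recognize the <rewrite> section under <system.webServer>, which is exactly what produces the HTTP 500:

```xml
<system.webServer>
  <rewrite>
    <rules>
      <!-- Hypothetical example: map a clean URL to a query-string URL -->
      <rule name="ArticleRewrite" stopProcessing="true">
        <match url="^articles/([0-9]+)$" />
        <action type="Rewrite" url="article.aspx?id={R:1}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```

With the module installed, a request for /articles/42 would be served by article.aspx?id=42 without the browser URL changing.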

Another issue is how IIS7 handles plus signs (+) and spaces in paths. I reported a DNN issue on this and am still working on a resolution.

Thirdly, I had to point domains to subfolders in my ROOT hosting account, and I had problems with DNN on this. Basically DNN appends the folder name to URLs. I read around for a fix, but nothing worked until I combined two fixes. I will have a separate entry on this, but if you need the info now, feel free to drop me a message.

Next, Joomla. A test installation to a subfolder with SEO-friendly URLs enabled resulted in CSS issues (links to CSS files seem to be invalid – I think it has something to do with /index.php/ in the paths). I have yet to investigate this further, and there's no fix so far. Without the SEO-friendly URL setting, though, it seems fine. Furthermore, once I've solved this, I'll have to deal with the same subfolder issue as DNN above. Nothing much to say at this point except that, unlike Apache, which has mod_rewrite (referenced in a lot of the issue resolutions found online), IIS7 doesn't. In line with this, I also tried out a couple of third-party rewriters, like the one from ManagedFusion, which uses mod_rewrite syntax and thus makes it easier to reuse existing .htaccess settings to resolve issues. More on this soon…

And then, WordPress. I'm having issues similar to the DNN subfolder issue above. In this case, if I set the Blog URL (General settings) to just the domain name (without the subfolder), the site works. However, if you use permalink settings other than the default (which uses ?p=n or ?page=n), you get HTTP 500 errors.

Did I also mention that GoDaddy restored a previous backup of my files without my knowledge?? I give them credit for coming clean and eventually confirming it (the reason given: allegedly some of my files got lost – what the heck, all was fine until they overwrote them). This is a major issue, and it's a good thing I back up my uploaded files locally and actually have a local Subversion installation. So, all bold face for this: DO NOT TRUST GoDaddy (or your hosting, for that matter) solely with your files; keep your own backup, regardless of how reliable your host claims to be. This is a very serious issue, and GoDaddy lost my confidence on this one. But weighing the pros and cons, I'll still stay with them, though not for my mission-critical apps. Besides, I just confirmed that they do indeed take daily backups, so if it is their fault you can always ask for the latest backup to be restored (assuming they admit it). I have suggested a tracking mechanism for this, and if worse comes to worst and they damage your operation by overwriting files, I'm pretty sure you'd have a way to be compensated.

Moving on: no comments on performance yet, but I don't expect this to be particularly performant, and if any of the sites I host gets considerable traffic, I'm pretty sure I'll eventually need to move to another host (or a VPS or dedicated server – though that's unlikely). Plus there's the 200MB size limit on each database to watch out for.

But in the meantime, aside from work and other things, I'll be busy setting up online communities, etc. I'll be sure to share whatever I can 😉

Migrating BlogEngine.NET to IIS 7

This blog, running on BlogEngine.NET (along with its parent/main site), is now on IIS 7 Integrated Mode, and I would like to share a little of my experience.

There were two major issues when I migrated these blogs to IIS 7. [more]

1) Server Error in '/' Application – "Request is not available in this context" when I accessed the sites

    * Of course I had customErrors ON, so I had to turn it OFF before I could see this detail.

    Interesting, since the site obviously worked fine on IIS 6 and no changes were made. But after a few Google clicks I ran into a post written specifically for this error: "IIS7 Integrated mode: Request is not available in this context exception in Application_Start" by Mike Volodarsky. I implemented the suggestion and that did it.

    Because of IIS 7's architectural changes, the request context is not available in Application_Start. And since BlogEngine.NET loads extensions in the Application_Start event, and those extensions make extensive use of the request context in path-related code (also for determining the protocol, et al.), the error occurred.

    Based on the article:

    "Basically, if you happen to be accessing the request context in Application_Start, you have two choices:

   1. Change your application code to not use the request context (recommended).
   2. Move the application to Classic mode (NOT recommended)."

    I chose option one; otherwise I would have stayed on IIS 6. So what I did was follow his recommended solution and move the extension-loading part of BlogEngine.NET to BeginRequest, with a provision so that it only runs on the first request (and only once).

Copying from the article:

    void Application_BeginRequest(Object source, EventArgs e)
    {
        HttpApplication app = (HttpApplication)source;
        HttpContext context = app.Context;

        // Attempt to perform first request initialization
        FirstRequestInitialization.Initialize(context);
    }

    class FirstRequestInitialization
    {
        private static bool s_InitializedAlready = false;
        private static Object s_lock = new Object();

        // Initialize only on the first request
        public static void Initialize(HttpContext context)
        {
            if (s_InitializedAlready)
            {
                return;
            }

            lock (s_lock)
            {
                if (s_InitializedAlready)
                {
                    return;
                }

                // *** Perform first-request initialization here ... ***

                s_InitializedAlready = true;
            }
        }
    }

And that's it for the FIRST issue.


2) The site worked, but hmmm… the styles were not being applied. Looking at the code, BlogEngine.NET uses HTTP handlers to link to the stylesheets, with markup like the following (depending on the theme name you have):

<link href="/blog/themes/BrightSide/css.axd?name=style.css" rel="stylesheet" type="text/css" />
Accessing that link directly in the browser didn't return anything (not found), while doing the same on the old hosting account (IIS 6) returned the style info successfully. So there must be something wrong with the handlers and modules.

Luckily I found this in the BE.NET forum: IIS 7.0 Integrated Mode Configuration Changes.


Go ahead and read the article, but the bottom line is: in web.config, move the module and handler configuration sections from <system.web> to <system.webServer>, along with a few minor changes. The result is the following (assuming you haven't changed this section since you downloaded BlogEngine.NET):


    <system.webServer>
        <modules>
            <add name="UrlRewrite" type="BlogEngine.Core.Web.HttpModules.UrlRewrite" preCondition="managedHandler" />
            <add name="ReferrerModule" type="BlogEngine.Core.Web.HttpModules.ReferrerModule" preCondition="managedHandler" />
            <add name="CompressionModule" type="BlogEngine.Core.Web.HttpModules.CompressionModule" preCondition="managedHandler" />
            <!--The CleanPageModule below removes whitespace which makes the page load faster in IE. Enable at own risk-->
            <!--<add name="CleanPageModule" type="BlogEngine.Core.Web.HttpModules.CleanPageModule, BlogEngine.Core"/>-->

            <!--Remove the default ASP.NET modules we don't need-->
            <remove name="Profile" />
            <remove name="AnonymousIdentification" />
        </modules>
        <handlers>
            <add name="FileHandler" verb="*" path="file.axd" type="BlogEngine.Core.Web.HttpHandlers.FileHandler, BlogEngine.Core" />
            <add name="ImageHandler" verb="*" path="image.axd" type="BlogEngine.Core.Web.HttpHandlers.ImageHandler, BlogEngine.Core" />
            <add name="SyndicationHandler" verb="*" path="syndication.axd" type="BlogEngine.Core.Web.HttpHandlers.SyndicationHandler, BlogEngine.Core" />
            <add name="SiteMap" verb="*" path="sitemap.axd" type="BlogEngine.Core.Web.HttpHandlers.SiteMap, BlogEngine.Core" />
            <add name="TrackbackHandler" verb="*" path="trackback.axd" type="BlogEngine.Core.Web.HttpHandlers.TrackbackHandler, BlogEngine.Core" />
            <add name="PingbackHandler" verb="*" path="pingback.axd" type="BlogEngine.Core.Web.HttpHandlers.PingbackHandler, BlogEngine.Core" />
            <add name="OpenSearchHandler" verb="*" path="opensearch.axd" type="BlogEngine.Core.Web.HttpHandlers.OpenSearchHandler, BlogEngine.Core" />
            <add name="MetaWeblogHandler" verb="*" path="metaweblog.axd" type="BlogEngine.Core.API.MetaWeblog.MetaWeblogHandler, BlogEngine.Core" />
            <add name="RsdHandler" verb="*" path="rsd.axd" type="BlogEngine.Core.Web.HttpHandlers.RsdHandler, BlogEngine.Core" />
            <add name="CssHandler" verb="*" path="css.axd" type="BlogEngine.Core.Web.HttpHandlers.CssHandler, BlogEngine.Core" />
            <add name="JavaScriptHandler" verb="*" path="js.axd" type="BlogEngine.Core.Web.HttpHandlers.JavaScriptHandler, BlogEngine.Core" />
            <add name="RatingHandler" verb="*" path="rating.axd" type="BlogEngine.Core.Web.HttpHandlers.RatingHandler, BlogEngine.Core" />
            <add name="OpmlHandler" verb="*" path="opml.axd" type="BlogEngine.Core.Web.HttpHandlers.OpmlHandler, BlogEngine.Core" />
            <add name="MonsterHandler" verb="*" path="monster.axd" type="BlogEngine.Core.Web.HttpHandlers.MonsterHandler, BlogEngine.Core" />
            <add name="BlogMLExportHandler" verb="*" path="blogml.axd" type="BlogEngine.Core.Web.HttpHandlers.BlogMLExportHandler, BlogEngine.Core" />
        </handlers>
    </system.webServer>

And that did it for me 🙂


Lost Internet Access due to ZoneAlarm and Microsoft Update KB951748

I ran into this issue a while ago: I lost internet access after installing a Windows update, and it turned out to be because of my ZoneAlarm installation. [more]

Update KB951748 is known to cause loss of internet access for ZoneAlarm users on Windows XP/2000. Windows Vista users are not affected.

Impact: sudden loss of internet access

Platforms affected: ZoneAlarm Free, ZoneAlarm Pro, ZoneAlarm AntiVirus, ZoneAlarm Anti-Spyware, and ZoneAlarm Security Suite

Resolution, basically:

1) Download and install the latest version of ZoneAlarm

2) Set the ZoneAlarm firewall Internet Zone security to Medium

3) Or uninstall the Windows update

Read the full article here, posted July 8, 2008.

There are also mentions in the forums of uninstalling and reinstalling ZoneAlarm, but also a note that the workaround exposes the very vulnerability the update was intended to fix.


Link: Flash can now be indexed by search engines

Flash can now be indexed by search engines…[more]

For most people on the Web, if Google or Yahoo cannot find something, it doesn't exist. That has been one of the biggest drawbacks to creating a website or application that displays itself as a Flash (SWF) file. Search engines could see the file, but they could not see what was in it. Until now. Adobe has come up with a way for the search engines to read SWF files and index all of the information they contain. That means any text or links in a Flash application can now be indexed.

Read Full Article from TechCrunch : Once Nearly Invisible To Search Engines, Flash Files Can Now Be Found And Indexed

The Microsoft Source Code Analyzer for SQL Injection tool

Microsoft has released the Microsoft Source Code Analyzer for SQL Injection, a tool for finding SQL injection vulnerabilities in ASP code. [more]

The Microsoft Source Code Analyzer for SQL Injection tool is a static code analysis tool that helps you find SQL injection vulnerabilities in Active Server Pages (ASP) code. This article describes how to use the tool, the warnings that are generated by the tool, and the limitations of the tool. See the tool Readme document for more information.

Note that this is a static source code analyzer, and thus it must be run on the machine where the source code resides (IMHO, preferably not in production – though since it only analyzes source code, it is non-intrusive).

Busby SEO Challenge World Cup

There is a "buzz" on the internet about the Busby SEO World Cup Challenge, sponsored by Busby Web Solutions, based in Australia. The key phrase is "BUSBY SEO CHALLENGE". No, I'm not joining; read along. [more]

I noticed this on one of the local blog directories this blog is listed in. The reason I'm posting this entry is that when I looked at the official results showing the top entries as of a certain date, I recognized a lot of sites from the Philippines. It's not that I had previously seen or known these sites, but the domain name, URL, or sometimes the username for the entry has a Filipino ring to it. From what I can see, most of those entries are part of, or at least associated with, a group of SEO specialists and web designers/masters/developers from Capiz, a province in the Philippines.

Again, no, I'm not joining the contest, nor am I part of it or acquainted with these entrants, and internet marketing is obviously not really my specialty (at least for now). But it does make me proud to see Filipinos make it onto the lists of contests like these, much more into some of the top slots. I'm almost certain there are even more out there who have yet to show the world what they have.

As a people we have our own weaknesses, but that doesn't change the fact that we are more than just a small country on this side of the world. Despite the challenges we face, we are home to some of the best and brightest, so let's build on that, or at least keep it that way 🙂

** I can't point out their sites/URLs at this time, as I didn't have much time to verify them and I don't personally know them (among other concerns), but I'd be glad to link to them to help their ranking (if it does help) if asked, once I find time to update my pages. Good luck to all of you!

My Verisign SSL Certificate Application Experience

I have a fair idea about SSL, certificates, and related security concepts, but in my previous jobs it was someone else (the client's IT) who did the preparation, request, and installation of SSL certificates – until recently, when I had to do it myself. I also had experience with trial and self-signed certificates, but of course some things are not the same (including the risk of messing something up).

It's not as difficult as it sounds, but I want to share a few things. [more]

For those not so familiar with SSL, I would suggest Google or Wikipedia, but since providers are working hard to get the highest search-engine rankings you might not get the best explanations, and the Wikipedia article seems too technical. The following article might help: What is SSL and what are certificates (I used Google to find it too, so there could be better ones), and here's something from Verisign as well: Secure Sockets Layer.

With that out of the way, the first step is to generate a Certificate Signing Request (CSR) to submit to Verisign.

For CSR generation, go to the following link and select your server: Verisign: Generating a CSR request. In my case it was IIS 6.0 on Windows Server 2003.
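As an aside, if you're not on IIS (or just want to see what a CSR actually is), here's a minimal sketch of generating the same thing with OpenSSL instead of the IIS wizard; the file names and subject fields are placeholders you'd replace with your own:

```shell
# Generate a new 2048-bit private key and a CSR for it in one step
openssl req -new -newkey rsa:2048 -nodes \
  -keyout www_example_com.key \
  -out www_example_com.csr \
  -subj "/C=US/O=Example Inc/CN=www.example.com"

# Inspect the subject before submitting the CSR to the CA --
# this is where the common name (CN) the CA checks comes from
openssl req -in www_example_com.csr -noout -subject
```

The private key stays on your machine; only the .csr content gets pasted into the provider's application wizard.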

Since the site had an existing certificate from another provider (not Verisign), I figured it wouldn't matter if I generated a renewal request (an option in the IIS Server Certificates wizard) despite the new provider, since it's the private key that matters and it's still on the machine. So I generated the renewal request, initiated the registration, submitted the CSR (copy-pasted into their application wizard), and payment was successful.

Now here's my first issue. Verisign wouldn't/couldn't process the application because the common name (CN) embedded in the CSR did not match the company registered as the owner of the domain we were trying to secure. Note that this was not an issue with the previous provider, but Verisign is more particular about it (which is a good thing). So remember: the CN in the request must match the owner name registered for the domain. If that is not the case, they offer the following options:

1) Update the registrant/owner information in the domain register to match the CN

2) Generate a new request with the matching CN and registrant

3) A domain authorization letter signed by the domain registrant or an employee of the domain registrant (and NOT the organizational contact in the request).

So moving forward, our best option was option 2.

Now, you can't create a new request without removing the currently attached certificate. Problem. But not for long, because Microsoft has a work-around at the following link: Renew/Create CSR while another certificate is still installed. Note that the title mentions Renew, but that's just how I did it. If you read the article, you'll notice it applies to IIS 5.0; IIS 6.0 seems to have added the ability to renew without the work-around, i.e. generating a renewal CSR no longer requires removing the existing certificate. Since I needed to create a NEW CSR, though, I did what was in the article. I had another existing unused website (note: a website, not a virtual directory) on that IIS, so I didn't need to create a new one. I used that to generate the CSR and applied to Verisign again.

It took some time and a number of follow-ups before they got back to us: they couldn't verify the technical telephone contact, i.e. they couldn't find a publicly verifiable number for our client. So the options were either:

1) A faxed copy of a recent telephone bill showing the Organization Name and telephone number

2) A notarized letter signed by the organizational contact authorizing the technical contact to request/apply for the product/certificate.

The technical and organizational contacts were the same person in our case, but we sent a notarized letter nevertheless. We were hopeful, but it turned out they wouldn't accept a faxed copy of the letter with an embossed notary seal. They suggested shading the seal; we re-sent it via fax and email, but no luck. The seal was local (not from the US) and not very legible even on the hard copy, so, since they insisted, we had a new one made.

After all the hassle, we finally got the request approved. I signed in to the Verisign Certificate Center (you get the details when you first register) and downloaded the keys. I got the PKCS#7 certificate (which includes the intermediate certificate authorities – CAs), since it was the common option and the site says to use PKCS unless you know you need something else. Knowing that it had the intermediate CA information, I went for it. If you install without the intermediate CAs when you actually need them, the certificate will appear invalid to the browser.

I saved the content of the certificate in a *.cer file and continued the work-around steps from the Microsoft link earlier: I processed the pending request on the temporary site (the other unused website), and the certificate was installed, only to be removed afterwards. Note that this is the trick: the certificate is disassociated from the other website, but it remains installed and its record stays on the machine for use by another website. So I went to the production website > Properties > Directory Security > Server Certificate > Replace certificate, found the certificate installed a while ago (take note of the serial number, or the name if it's obvious), replaced it, and finally you're good to go. Verify the certificate by accessing the site from a browser on another machine (not from the server itself). Read the article again for more detailed instructions if you're not familiar with it yet.

Verisign costs more than others, but I'd still go with them if I/the client can afford it, despite this experience. I would, however, strongly suggest that if you have to renew a certificate – especially when moving providers – you do it well before the existing certificate's expiration (about a month) to cover unexpected issues.

And we're done. Gotta sleep. 🙂

Privacy in sending email to mailing list (BCC)

I think most people should know this already, although I'm not quite sure about that, so I'm posting it anyway.

More often than not, I receive emails sent to mailing lists where the individual recipients don't really know each other, or, even if they do, might not necessarily want the other recipients to have their email address. I personally don't mind disclosing mine (my blog and numerous online profiles would easily let you figure out my email), but there is a very good chance others would mind, and unless you are absolutely sure they don't, use BCC (blind carbon copy) for those recipients instead. [more]

Don't get me wrong: this is a free world, so do as you please, but your recipients will likely appreciate proper use of the BCC field, not to mention that it minimizes spam. In its own little way, it makes the web and the world a safer place. Black-hat hackers need not employ complicated techniques to break into email address databases or gain access to a server if they only need to compromise a few email accounts – and if those accounts happen to be full of valid mailing-list emails, they're in for a treat. We don't really want that, do we? 🙂

In the context of e-mail, blind carbon copy (abbreviated BCC and sometimes referred to as Blind Courtesy Copy) refers to the practice of sending a message to multiple recipients in such a way that what they receive does not contain the complete list of recipients.

Read the full article about BCC on Wikipedia.