language translation for this site

To reach a wider audience, I have placed a Google Translate gadget at the upper left of these pages. [more]

I'm a frustrated linguist, and I will try my best to write in a way that is likely to produce correct translations, but forgive me if I don't always succeed. Hopefully soon I can understand your languages too.

It requires JavaScript to be enabled, and after you select a language it redirects you to Google, which serves the translated version of the page. Thanks to Google, but hopefully soon you won't have to leave the page to view it in a different language.

There is also BabelFish (from Yahoo/AltaVista), but I used the Google gadget as it seems simpler and supports more languages. The only limitation is that the Google version supports only English as the source language, but in this case that won't be a problem.

I hope this will be a useful feature.

Suspected Trojan or Virus qxty9be.cmd

A suspected Trojan is messing up my PC at this very moment. [more]

I attached my portable drive to a computer that didn't have antivirus today. The computer was working fine (or at least it seemed to be), but I found a certain "autorun.inf" and "qxty9be.cmd" on that disk afterwards.

I scanned the disk and Symantec didn't see anything! However, in my attempt to learn more about it (thinking I could run the .cmd file inside a Virtual Machine), I copied it to my local disk.

After copying, I couldn't see the file (it was hidden, though weirdly enough it was visible while on my portable drive). So I went to Tools > Folder Options > View > Show hidden files and folders.

That's when Auto-Protect came up. It detected something! But the Auto-Protect Results window froze. Stupid me then tried to scan the folder again, and the Manual Scan froze as well.

I didn't know copy/pasting without running could cause some unexpected behavior.

I saw the same behavior on another PC earlier, and that machine ended up having boot problems. If this was indeed the cause, then it did something really serious.

I couldn't find any resource about it on Google except one, with no resolution whatsoever, and it was posted only yesterday or so.

The Symantec Auto-Protect Results and Manual Scan windows are actually still hanging at this point. So this feels like a goodbye letter, as I expect something bad to happen once I restart (similar to what happened earlier).

So this is just a warning, and wish me luck (I have backups for sure, but still, things are never gonna be the same again…).

UPDATE

Just restarted, and unlike the other machine, this one survived, thanks to the following:

1) ZoneAlarm (the OSFirewall feature, I think) – it prompted whether to allow the *.cmd to run or not (of course I denied it)

2) Spybot Search & Destroy – scanned and detected the Trojan as Win32.Ruju.a, though only God (and its creator) knows what else it does

3) Acronis True Image – I did a backup even while Symantec was freezing (it might end up as a corrupted backup, but backing up is trivial, so I did it anyway). Plus my previous backup gave me the confidence that, worst comes to worst, I have one

4) Symantec, for Auto-Protect. Not for protecting me, but for at least detecting the problem despite freezing. The Trojan still managed to get through; I'm certain of this because clicking on my other drives was running the *.cmd file, and Spybot confirms the Trojan made its way in. I also did a manual scan on the portable drive before all this erupted and it didn't detect anything. What's wrong, Symantec? (I have v10.)

Hope that's the last of it. Gotta get back to work.

UPDATE 2

* This worm is generally transmitted via the AutoRun feature, so it's always best to turn that feature off. In the Group Policy Editor (Start > Run > gpedit.msc), go to User Configuration > Administrative Templates > System and set Turn Off Autoplay to Enabled. A registry-based alternative is sketched below.
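For Windows editions without gpedit.msc, the same effect can be set directly in the registry. The one-liner below is a hedged sketch, not from the original post: NoDriveTypeAutoRun is a bitmask of drive types excluded from AutoRun, and 255 (0xFF) disables it for all drive types. Back up your registry before making changes.

reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer" /v NoDriveTypeAutoRun /t REG_DWORD /d 255 /f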

* Furthermore, here's more detailed information about the issue: http://www.threatexpert.com/report.aspx?md5=e24a0458c2ef5333b06be67c7ea47b95

Will keep updating this as necessary. Let's make the world a safer place…

GoDaddy IIS7 DotNetNuke Joomla WordPress

UPDATE (2/2/09): On GoDaddy, even on deluxe/premium/unlimited hosting accounts, applications (e.g. a .NET application root) are configured as virtual directories. Thus the application root path is still rooted at the ROOT of the hosting disk space. This makes sense, I think; so even if a domain points to, say, ROOT/subfolderA, Response.Redirect("~/Page.aspx") will still point to <domain>/subfolderA/Page.aspx, and it will appear as such in the browser.
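To illustrate, here's a minimal C# sketch (my own, not from GoDaddy's docs), assuming code running in an ASP.NET page whose application virtual root is /subfolderA:

// minimal sketch; assumes the application's virtual root is /subfolderA
string absolute = VirtualPathUtility.ToAbsolute("~/Page.aspx");
// absolute == "/subfolderA/Page.aspx" – the subfolder remains part of the path
Response.Redirect("~/Page.aspx"); // the browser ends up at <domain>/subfolderA/Page.aspx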

Last post for Jan 2009, and I just wanted to say that on top of upgrading to IIS 7, I also upgraded my hosting subscription to GoDaddy Unlimited (disk space, traffic, databases – though there is a 200MB hard limit on database size).

Along with this comes support for PHP 5, which means I will be trying out a couple of PHP and MySQL apps. [more]

My web development life actually started with a little PHP + MySQL rather than .NET, but I haven't practiced them since – I got addicted to IntelliSense and drag-and-drop, and SQL Server just came along with .NET. So somehow I'll be brushing up on them a little bit; maybe not in detail, but by trying out popularly used apps like Joomla and WordPress.

With this, expect a couple of posts on these topics.

Starting with the fact that, as of this writing, GoDaddy DOESN'T support (or have installed) the Microsoft URL Rewrite Module for IIS 7, for some reason. This is one of the most useful (and sought-after) features of IIS 7, but when I had to rewrite some URLs I got HTTP error 500 – rewriting is configured in the system.webServer element, and any error there causes an HTTP 500. I called up their customer support, and only then did I learn they don't have this important feature yet. I would have liked to know sooner, but the PHP support on IIS 7 is still a good deal. An example of the kind of rule the module enables is shown below.
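For reference, here's what such a rule looks like in web.config – a hypothetical example (the rule name, pattern, and target URL are made up). Ironically, on a server without the module installed, the very presence of this <rewrite> section is what triggers the HTTP 500:

<system.webServer>
  <rewrite>
    <rules>
      <!-- hypothetical rule: maps /post/123 to post.aspx?id=123 -->
      <rule name="BlogPost" stopProcessing="true">
        <match url="^post/([0-9]+)$" />
        <action type="Rewrite" url="post.aspx?id={R:1}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>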

Another issue is how IIS 7 handles the plus sign (+) and spaces in paths. I reported a DNN issue on this and am still working on a resolution.

Thirdly, I had to point domains to subfolders in my ROOT hosting account, and I had problems with DNN on this. Basically, DNN appends the folder name to URLs; I read around for a fix, but nothing worked until I combined two of them. I will have a separate entry on this, but if you need the info now, feel free to drop me a message.

Next, Joomla. A test installation into a subfolder with SEO-friendly URLs enabled resulted in CSS issues (the links to the CSS files seem to be invalid – I think it has something to do with /index.php/ in the paths). I have yet to investigate this further, and I have no resolution so far. Without the SEO-friendly URL setting, though, it seems fine. Furthermore, once I've solved this issue, I will have to deal with the same subfolder issue mentioned for DNN above. Nothing much to say at this point, except that unlike Apache – which has mod_rewrite and is referenced in a lot of issue resolutions found online – IIS 7 doesn't have that. In line with this, I also tried out a couple of third-party rewriters, like the one from ManagedFusion, which uses mod_rewrite syntax and thus makes it easier to reuse existing .htaccess settings to resolve issues. More on this soon…

And then, WordPress. It's having issues similar to the DNN subfolder issue above. In this case, if I set the Blog URL (General Settings) to just the domain name (without the subfolder), the site works. However, with any permalink settings other than the default query-string style (e.g. ?p=n or ?page=n), I get HTTP 500 errors.

Did I also mention that GoDaddy restored a previous backup of my files without my knowledge?? I give them credit for coming clean and eventually confirming it (the reason, allegedly, is that some of my files got lost – what the heck, all was fine until they overwrote them). This is a doubly serious issue, and it's a good thing I back up my uploaded files locally and actually have a local Subversion installation. So, all boldface for this: DO NOT TRUST GoDaddy (or your hosting, for that matter) solely with your files; keep your own backup, regardless of how reliable your hosting claims to be. GoDaddy lost my confidence on this one, but weighing the pros and cons, I'll still retain them – just not for my mission-critical apps. Besides, I just confirmed that they do indeed make daily backups, so if something were their fault you could always ask for the latest backup to be restored (assuming they admit it, though). I have suggested some tracking mechanism for this, and if worst comes to worst and they damage your operation by overwriting files, I'm pretty sure you have a way to get compensated.

Moving on: no comment on performance yet, but I don't expect this to be performant, and if any of the sites I'll be hosting gets considerable traffic, I'm pretty sure I'll eventually need to move to other hosting (or VPS or DS – though unlikely). Plus there's the 200MB size limit on each database; I have to watch out for that.

But in the meantime, aside from work and other things, I'll be busy setting up online communities, etc. I'll be sure to share whatever I can 😉

BlogEngine.NET migrating to IIS 7

This blog, running on BlogEngine.NET (along with its parent/main site), is now running on IIS 7 Integrated Mode, and I would like to share a little of my experience.

There were two major issues when I migrated this blog to IIS 7. [more]

1) "Server Error in '/' Application – Request is not available in this context" when I accessed the sites

    * Of course, I had customErrors ON, so I had to turn it OFF before I was able to see this detail.

    Interesting, since the site obviously worked fine on IIS 6 and no changes were made. But after a few Google clicks I ran into an article written specifically about this error: "IIS7 Integrated mode: Request is not available in this context exception in Application_Start" by Mike Volodarsky. I implemented the suggestion and that did it.

    Because of IIS 7's architectural changes, the request context is not available in Application_Start. And since BlogEngine.NET loads extensions in the Application_Start event, and the extensions make extensive use of the request context in path-related code (also for determining the protocol, etc.), the error occurred.

    Based on the article:

    "Basically, if you happen to be accessing the request context in Application_Start, you have two choices:

   1. Change your application code to not use the request context (recommended).
   2. Move the application to Classic mode (NOT recommended).
"

    I chose option one; otherwise I would have stayed with IIS 6. So what I did was follow his recommended solution and move the extension-loading part of BlogEngine.NET to BeginRequest, with provisions so that it runs only on the first request (and only once).

Copying from the article:

void Application_BeginRequest(Object source, EventArgs e)
{
    HttpApplication app = (HttpApplication)source;
    HttpContext context = app.Context;

    // Attempt to perform first-request initialization
    FirstRequestInitialization.Initialize(context);
}

class FirstRequestInitialization
{
    private static bool s_InitializedAlready = false;
    private static Object s_lock = new Object();

    // Initialize only on the first request
    public static void Initialize(HttpContext context)
    {
        if (s_InitializedAlready)
        {
            return;
        }

        lock (s_lock)
        {
            if (s_InitializedAlready)
            {
                return;
            }

            // *** Perform first-request initialization here … ***
            s_InitializedAlready = true;
        }
    }
}

And that's it for the FIRST issue.

2) The site worked, but hmmm… the styles were not being applied. Looking at the code, BlogEngine.NET uses httpHandlers to link to the stylesheets, with something like this (depending on the theme you have):

<link href="/blog/themes/BrightSide/css.axd?name=style.css" rel="stylesheet" type="text/css" />

Accessing the link directly in the browser didn't return anything (not found), while doing the same on the old hosting account (IIS 6) returned the style info successfully. So there must have been something wrong with the handlers and modules.

Luckily, I found this on the BlogEngine.NET forum: IIS 7.0 Integrated Mode Configuration Changes.

Go ahead and read the article, but the bottom line is: in web.config, move the module and handler configuration sections from <system.web> to <system.webServer>, plus a few minor changes. The result is below (assuming you haven't changed this section since you downloaded BlogEngine.NET):

<system.webServer>
    <modules>
        <add name="UrlRewrite" type="BlogEngine.Core.Web.HttpModules.UrlRewrite" preCondition="managedHandler" />
        <add name="ReferrerModule" type="BlogEngine.Core.Web.HttpModules.ReferrerModule" preCondition="managedHandler" />
        <add name="CompressionModule" type="BlogEngine.Core.Web.HttpModules.CompressionModule" preCondition="managedHandler" />
        <add name="WwwSubDomainModule" type="BlogEngine.Core.Web.HttpModules.WwwSubDomainModule" preCondition="managedHandler" />
        <!--The CleanPageModule below removes whitespace which makes the page load faster in IE. Enable at own risk -->
        <!--<add name="CleanPageModule" type="BlogEngine.Core.Web.HttpModules.CleanPageModule, BlogEngine.Core"/>-->

        <!--Remove the default ASP.NET modules we don't need-->
        <remove name="Profile" />
        <remove name="AnonymousIdentification" />
    </modules>

    <handlers>
        <add name="FileHandler" verb="*" path="file.axd" type="BlogEngine.Core.Web.HttpHandlers.FileHandler, BlogEngine.Core" />
        <add name="ImageHandler" verb="*" path="image.axd" type="BlogEngine.Core.Web.HttpHandlers.ImageHandler, BlogEngine.Core" />
        <add name="SyndicationHandler" verb="*" path="syndication.axd" type="BlogEngine.Core.Web.HttpHandlers.SyndicationHandler, BlogEngine.Core" />
        <add name="SiteMap" verb="*" path="sitemap.axd" type="BlogEngine.Core.Web.HttpHandlers.SiteMap, BlogEngine.Core" />
        <add name="TrackbackHandler" verb="*" path="trackback.axd" type="BlogEngine.Core.Web.HttpHandlers.TrackbackHandler, BlogEngine.Core" />
        <add name="PingbackHandler" verb="*" path="pingback.axd" type="BlogEngine.Core.Web.HttpHandlers.PingbackHandler, BlogEngine.Core" />
        <add name="OpenSearchHandler" verb="*" path="opensearch.axd" type="BlogEngine.Core.Web.HttpHandlers.OpenSearchHandler, BlogEngine.Core" />
        <add name="MetaWeblogHandler" verb="*" path="metaweblog.axd" type="BlogEngine.Core.API.MetaWeblog.MetaWeblogHandler, BlogEngine.Core" />
        <add name="RsdHandler" verb="*" path="rsd.axd" type="BlogEngine.Core.Web.HttpHandlers.RsdHandler, BlogEngine.Core" />
        <add name="CssHandler" verb="*" path="css.axd" type="BlogEngine.Core.Web.HttpHandlers.CssHandler, BlogEngine.Core" />
        <add name="JavaScriptHandler" verb="*" path="js.axd" type="BlogEngine.Core.Web.HttpHandlers.JavaScriptHandler, BlogEngine.Core" />
        <add name="RatingHandler" verb="*" path="rating.axd" type="BlogEngine.Core.Web.HttpHandlers.RatingHandler, BlogEngine.Core" />
        <add name="OpmlHandler" verb="*" path="opml.axd" type="BlogEngine.Core.Web.HttpHandlers.OpmlHandler, BlogEngine.Core" />
        <add name="MonsterHandler" verb="*" path="monster.axd" type="BlogEngine.Core.Web.HttpHandlers.MonsterHandler, BlogEngine.Core" />
        <add name="BlogMLExportHandler" verb="*" path="blogml.axd" type="BlogEngine.Core.Web.HttpHandlers.BlogMLExportHandler, BlogEngine.Core" />
    </handlers>
</system.webServer>

And that did it for me 🙂

remove malicious script tags from files

Here's a small Windows Forms application that I created to automate the removal of malicious SCRIPT tags inserted into some web files (or, in general, even non-malicious scripts). [more]

Of course, you can always do this manually, but if we're talking about hundreds or thousands of files, it will be one heck of a job.

The idea is to:

1) retrieve a list of all script tags in all files in a given folder (including subfolders)

2) list scripts found

3) select the scripts to remove – also, if a script contains line breaks, select it and click the [View Script Detail] button to inspect it. Note that the checkedListBox is not set to check on click

4) set a folder where the "cleaned" files will be saved

5) then process (the selected scripts are removed and the files are saved in the Target Folder, retaining their folder hierarchy)

That's it

Here's a glimpse at the "core" code of the application. Note that I employed recursion instead of the faster, better-performing stack approach, for simplicity (a sketch of the stack approach appears after the code below).

The complete source code can be downloaded below, along with the output (executable).

** Search a root folder (and its subfolders and files) for script tags (and their contents, of course). These methods assume using System.IO and System.Text.RegularExpressions.

// recursive
private void SearchFolder(string newRootFolder)
{
    DirectoryInfo rootDir = new DirectoryInfo(newRootFolder);
    foreach (FileInfo fi in rootDir.GetFiles())
    {
        SearchFile(fi);
    }

    foreach (DirectoryInfo di in rootDir.GetDirectories())
    {
        SearchFolder(di.FullName);
    }
}

private void SearchFile(FileInfo fi)
{
    using (StreamReader sr = new StreamReader(fi.FullName))
    {
        string fileContent = sr.ReadToEnd();
        MatchCollection ms =
            Regex.Matches(
                fileContent,
                @"<script([^>]*)>.*?</script>",
                RegexOptions.Singleline); // handle line breaks inside script tags

        foreach (Match m in ms)
        {
            if (checkedListBox1.Items.Contains(m.Value))
                continue;

            checkedListBox1.Items.Add(m.Value);
        }
    }
}

** Process a root folder (and its subfolders and files): where a script marked for removal is found, replace it with an empty string (effectively removing it), then save the file to the Target Folder.

// recursive
private void ProcessFolder(string newRootFolder)
{
    DirectoryInfo rootDir = new DirectoryInfo(newRootFolder);
    foreach (FileInfo fi in rootDir.GetFiles())
    {
        ProcessFile(fi);
    }

    foreach (DirectoryInfo di in rootDir.GetDirectories())
    {
        ProcessFolder(di.FullName);
    }
}

private void ProcessFile(FileInfo fi)
{
    string path = fi.FullName;
    using (StreamReader sr = new StreamReader(path))
    {
        string fileContent = sr.ReadToEnd();
        StringBuilder sb = new StringBuilder(fileContent);
        int origLength = sb.Length;
        foreach (string stringToRemove in selectedScripts)
        {
            sb.Replace(stringToRemove, String.Empty);
        }

        if (sb.Length != origLength)
        {
            // textBox1 = source root folder, textBox2 = target folder (per the UI described above)
            string newFilePath = path.Replace(textBox1.Text, textBox2.Text);
            string newFileDirectory = Path.GetDirectoryName(newFilePath);
            if (!Directory.Exists(newFileDirectory))
            {
                Directory.CreateDirectory(newFileDirectory);
            }

            string newFileContent = sb.ToString();
            using (StreamWriter sw = File.CreateText(newFilePath))
            {
                sw.Write(newFileContent);
            }
        }
    }
}
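As promised above, here's what the stack-based (non-recursive) traversal might look like – a hypothetical sketch that isn't part of the attached source; SearchFolderIterative is a made-up name, and it requires System.Collections.Generic:

// hypothetical iterative variant of SearchFolder using an explicit stack
// (avoids deep call stacks on deeply nested folders)
private void SearchFolderIterative(string rootFolder)
{
    Stack<string> pending = new Stack<string>();
    pending.Push(rootFolder);

    while (pending.Count > 0)
    {
        DirectoryInfo dir = new DirectoryInfo(pending.Pop());

        foreach (FileInfo fi in dir.GetFiles())
        {
            SearchFile(fi); // same per-file scan as the recursive version
        }

        foreach (DirectoryInfo di in dir.GetDirectories())
        {
            pending.Push(di.FullName);
        }
    }
}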

Files for Download:

ScriptRemover_Executable.zip (11.11 kb)

ScriptRemover_Source.zip (10.57 kb)

Hope this helps in one way or another, and as usual, feel free to leave comments/corrections. This was made in a hurry, but I tried my best to make it useful and working.

*** Note that this has some known limitations (due to the regex used):

1) script tags with spaces, like <script>abc</script > (note that the closing script tag has a space before the >)

2) self-closing script tags, like <script src="url" />

There was no need for me to handle these cases; however, should you need to handle them, feel free to drop me a message and I'll try to help out. A possible regex variant is sketched below.
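For instance, a loosened pattern along these lines (an untested sketch, not part of the attached source) should cover both cases above:

// hypothetical variant of the pattern used in SearchFile: also matches
// "</script >" (space before >) and self-closing tags like <script src="url" />
MatchCollection ms =
    Regex.Matches(
        fileContent,
        @"<script\b([^>]*?)(/\s*>|>.*?</script\s*>)",
        RegexOptions.Singleline | RegexOptions.IgnoreCase);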

By the way, Happy 2009 everyone!

WindowsPrincipal.IsInRole doesn’t reflect changes until restart

Just an observation from some time ago: if you create a new Windows role, add a user to it, and create a WindowsPrincipal for that user, the IsInRole method doesn't reflect the membership change until a restart. [more]

For example, given the code below (a Console Application project):

using System;
using System.Collections.Generic;
using System.Text;
using System.Security.Principal;

namespace WindowsPrincipalTest
{
    class Program
    {
        static void Main(string[] args)
        {
            WindowsIdentity ident = WindowsIdentity.GetCurrent();
            WindowsPrincipal principal = new WindowsPrincipal(ident);
            Console.WriteLine("IsAdmin = " + principal.IsInRole(WindowsBuiltInRole.Administrator));
            Console.WriteLine("IsCustomRole = " + principal.IsInRole("CustomRole"));
            Console.ReadKey();
        }
    }
}

Assuming you have no CustomRole yet, when executing this code for the first time you should see IsCustomRole = False in the output.

Then create a role named "CustomRole" (if not yet present) and add yourself (or the user that will execute the sample code) as a member of that role.

I usually do this using the Computer Management MMC (Start > Settings > Control Panel > Administrative Tools, OR Start > Run > compmgmt.msc > OK), under the System Tools > Local Users and Groups node (a command-line equivalent is shown below).
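A command-line alternative, run from an administrator prompt (a sketch not in the original post): the first line creates the local group, the second adds the current user to it.

net localgroup CustomRole /add
net localgroup CustomRole %USERNAME% /add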

After that, execute the code/application again and you should see the same output as above: IsCustomRole is still False.

And you should notice that unless you restart your computer (or at least log off and back on – Windows captures group membership in the user's access token at logon), the membership change will not be reflected. (** Just a reminder to save your documents before restarting.)
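If you need to see the up-to-date membership without a restart, one possible approach is to query the local group directly rather than the cached logon token. This is a hedged sketch using the WinNT ADSI provider (GroupMembersDump is a made-up name, and you'd need a reference to System.DirectoryServices.dll):

using System;
using System.Collections;
using System.DirectoryServices;

class GroupMembersDump
{
    static void Main()
    {
        // Read the local group via ADSI; unlike WindowsPrincipal.IsInRole,
        // this is not served from the logon token, so it reflects the change right away.
        using (DirectoryEntry group = new DirectoryEntry("WinNT://" + Environment.MachineName + "/CustomRole,group"))
        {
            foreach (object member in (IEnumerable)group.Invoke("Members"))
            {
                using (DirectoryEntry entry = new DirectoryEntry(member))
                {
                    Console.WriteLine(entry.Name); // list current members of CustomRole
                }
            }
        }
    }
}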

Address1 vs Address2

Update: I had a UPS package returned to sender (side note: they suck, and given a choice I will never use UPS again). I would recommend putting the PMB (private mailbox number) in Address1 instead of Address2.

What really is the difference between Address1 and Address2, and is it significant?

It depends on the country, but generally (e.g. in the US):

* Address1 is expected to have Street Number, Street Name, or maybe PO Box or PMB (Private Mail Box)

* Address2 refers to Apartment, Floor, Suite, Bldg #

NO, Address2 is not asking for a secondary address or a backup of whatever you put in Address1. Neither is it a "confirm" address field, nor is it simply a "continuation" of Address1.

Also, if there is no Company field, Address2 is generally a better place to write that information than Address1 – though this seems to differ by carrier; for UPS I suggest putting it in Address1.

It's safe to say that Address1 should be general (but specific enough to pinpoint a geographical location – and obviously not include City, State, ZIP, or Country), while extra information should go in Address2.

Address2 fields in forms are generally (and for usability's sake) shorter, and in my opinion should at least include hints as to what goes in them.

Also, for US ZIP code lookups you might find the following link from USPS helpful: USPS ZIP Code Lookup.

caution in dropping a temp table before creating it

Recently I ran into a script like this inside a stored procedure:

IF OBJECT_ID('tempdb..#temp1') IS NOT NULL DROP TABLE #temp1

Basically, the purpose of this script is to check whether #temp1 (a regular temporary table) exists and, if so, drop it. However, I think it can have unintended consequences, and it may be safer not to include it. [more]

Say you have a script that includes a call to the stored procedure (e.g. SampleStoredProc). If the script (let's call it the "caller") creates a table #temp1, and at the top of SampleStoredProc you have IF OBJECT_ID('tempdb..#temp1') IS NOT NULL DROP TABLE #temp1, what will happen is that the caller's #temp1 table will be dropped. The caller might not want that (or won't expect that the #temp1 table he/she created will be dropped); it is possible that after calling SampleStoredProc the caller still wants to use/access #temp1.

On the other hand, if no DROP TABLE #temp1 is executed inside SampleStoredProc and a CREATE TABLE #temp1 is issued there, it will not be a problem even if the caller already has a #temp1: the caller's #temp1 and SampleStoredProc's #temp1 are identified separately. Since SampleStoredProc is a stored procedure, it forms its own scope for temp tables, and it is safe to assume that at the start of the stored procedure no temp tables exist in that scope. Basically, the idea is that SampleStoredProc should not touch anything beyond its own scope.

So, in my opinion, including this code can cause unexpected behavior for the caller, while removing it poses no risk – not to mention it shortens the code, decreases complexity, and improves readability. The author of the stored procedure (e.g. SampleStoredProc) that creates #temp1 knows when it is present and shouldn't worry about clashing with another #temp1 in another session. You can still explicitly DROP TABLE #temp1 if you want, but only after you have created your own #temp1, so you're sure you'd be dropping the one you created and not one from another script.

temp table (#), global temp tables (##) and @table variables

I've been working "full-time" on T-SQL scripts for the past month (not on .NET Windows/web apps), mostly on optimization. And I feel I should share with everyone this article about temp tables and table variables, along with some of my own notes. Go read the article below, then come back here. Take careful note of the Conclusion at the end of the article. [more]

Should I use a #temp table or a @table variable?

Things to generally consider are speed, disk space, CPU utilization, and persistence. Here are some short, hopefully helpful notes.

1) persistence – if you need the data to persist even after execution, then no doubt you need permanent tables.

2) CPU utilization – always review indexes. The performance effect of choosing among regular temp tables, global temp tables, and table variables is not as significant as that of a missing index. Among other things, it is safe to say: always ensure you have a primary key, and if you will run queries with ORDER BY, always check that your clustered indexes are correct. You can use estimated or actual execution plans to analyze your queries. I recommend SQL Profiler too, but correct me if I'm wrong – I believe it can only profile permanent tables.

3) speed – first, indexes again (see above). Second, and very important: although table variables may seem (and commonly are) faster than temp tables, my observation is that if you are dealing with large datasets, temp tables are way, way faster than table variables. I can't quantify it for the dataset I'm working on, but suffice it to say the table-variable version was so many times slower that I cancelled the execution.

4) disk space – temp tables, global temp tables, and table variables take up the same space as permanent tables would, but they should be cleared once the procedure/function goes out of scope. The tempdb transaction log is less impacted by table variables than by #temp tables: table variable log activity is truncated immediately, while #temp table log activity persists until the log hits a checkpoint, is manually truncated, or the server restarts.