For some time I was wondering why some sites that you access without the WWW subdomain redirect you to the one with the WWW subdomain.
I know most of us will agree that accessing with or without the WWW
subdomain should be supported. But beyond that, I find the
arguments of the no-www side more appealing. Of course, don't just
take my word for it; see what you think would fit you. Check out NO-WWW and YES-WWW, and also search the net (Google, Live Search, Yahoo) for more info.
So that makes my preferred URL for my site http://ryangaraygay.com (dropping the www, but if you still really want to use www, feel free to do so and you'll still reach my site).
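For sites that do enforce one canonical form, the redirect is often done early in the request pipeline. Here's a minimal sketch in ASP.NET (this blog's context) of redirecting www to no-www with a permanent 301; the host-name handling is an illustrative assumption, not code from any particular site:

```csharp
// In Global.asax.cs: send www.<host> requests to <host> with a 301 redirect.
using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        Uri url = Request.Url;
        if (url.Host.StartsWith("www.", StringComparison.OrdinalIgnoreCase))
        {
            // Rebuild the URL without the leading "www." and redirect permanently.
            string canonical = url.Scheme + "://" + url.Host.Substring(4)
                             + url.PathAndQuery;
            Response.StatusCode = 301;
            Response.AddHeader("Location", canonical);
            Response.End();
        }
    }
}
```

The same idea works in reverse for the yes-www camp; only the host check and the rebuilt URL change.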
I joined the Infragistics forums on Feb 21, 2008 and I already have 5 posts, 3
of which have gone unanswered. I've never really been a fan of 3rd-party
controls but had to use one for a project. Don't get me wrong, I'm
impressed with the Infragistics features in typical usage, but it seems
that when combined with ASP.NET AJAX and the AJAX Control Toolkit it messes
up a little. Then aside from AJAX, I have this issue where binding
the WebGrid more than once causes a whole lot of side effects.
Below are links to my posts for reference. If you work with these components,
I suggest taking the time to read the forum entries and being
proactive, rather than going through a lot of work only to discover
there is a known issue.
Button in WebTab (async=on) and UpdatePanel fires server side click twice
WebCombo.DataValue returns null on second postback (doesn't persist)
Server Side Event Handler (eg. Button click) doesn't trigger with oEvent.fullPostBack = true
Calling DataBind on a webgrid more than once (not possible?)
Is it possible to change the filter "SelectWhere" from server side code? And How?
It's been less than a month since I started using the product, and I actually
don't have a license installed, so I don't have design-time capabilities (the
license is per developer seat, but even without a license the DLL can
be moved over; you just lose design-time capabilities
at least, that's how I understood it). But if there's any chance you
run into the same issues and I might have mentioned a work-around,
please feel free to drop me a message.
I'll be updating this post should I have additional posts in the forums.
It's been some time since I've posted something. We'll be moving over to
another company, but hopefully when things settle down I'll be able to
post more regularly. Anyway, while trying to debug a project, I tried to
set the HttpRuntime executionTimeout to a certain value. When a request
reaches the server but takes more than the indicated value to finish
processing and return a response (not to be confused with the
time the response takes to travel from server to browser), an exception is thrown to
the client. I tried to test this by including a
System.Threading.Thread.Sleep(x) in my code and setting the execution
timeout to a value less than x.
However, for some reason, no
error/exception was thrown. After some research, it turned out that the
[compilation] -> [debug] attribute must be set to false for
the execution timeout to take effect.
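For reference, the relevant web.config section looks roughly like this; the five-second timeout is just an illustrative value:

```xml
<system.web>
  <!-- executionTimeout is in seconds; requests running longer are aborted -->
  <httpRuntime executionTimeout="5" />
  <!-- the timeout is only enforced when debug is false -->
  <compilation debug="false" />
</system.web>
```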
UPDATE: Just wanted to point out that this issue likely appeared after installing .NET Framework 2.0 Service Pack 1 (SP1).
I just updated an entry in the TechNet forums regarding an
issue I encountered previously. I was able to solve it by
reinstalling SQL Server (which is not a very brilliant solution), but
since I only had the issue on my development machine, reinstalling was
an option and it worked for me.
The collapse/expand all projects and open containing folder features (plus
others) would save time and effort, so check out this add-in from
Gaston Milano. I've tried it myself for some time already and
haven't run into any issues, so go give it a try.
There are a number of times when we need to test just the content of
the email we send from code (e.g. that the text and formatting are correct)
without actually needing to send the email.
You can do this by setting the SMTP client's DeliveryMethod property to
SmtpDeliveryMethod.PickupDirectoryFromIis.
Of course this assumes that you use System.Net.Mail.SmtpClient. Also,
you'd need IIS installed, BUT it does NOT need to be running. Before
knowing this, I had to set up my IIS SMTP virtual server
or use a valid SMTP host (it fails if the host is invalid) and wait
for the message to arrive in my inbox (which at times takes forever,
without me knowing whether something is wrong with my code or the host).
Furthermore, you free yourself from the risk of spamming other people (especially clients) or even yourself.
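A minimal sketch of what this looks like in code; the addresses are made up, and the message is written as an .eml file to the IIS pickup directory (typically C:\Inetpub\mailroot\Pickup), where you can open it to check content and formatting:

```csharp
using System.Net.Mail;

class MailPickupDemo
{
    static void Main()
    {
        SmtpClient client = new SmtpClient();
        // Write the message to the IIS pickup directory instead of sending it.
        // IIS (with the SMTP service) must be installed but need not be running.
        client.DeliveryMethod = SmtpDeliveryMethod.PickupDirectoryFromIis;

        MailMessage message = new MailMessage(
            "from@example.com", "to@example.com",
            "Test subject", "<b>Test body</b>");
        message.IsBodyHtml = true;

        client.Send(message);
    }
}
```

Switching back to real delivery is just a matter of setting DeliveryMethod to SmtpDeliveryMethod.Network, so no other code needs to change.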
And of course for mail concerns, the link has always been a great (if not the greatest) resource
"Procedure expects parameter '@statement' of type 'ntext/nchar/nvarchar'."
A few days ago, I ran into the error mentioned above. sp_executesql requires that its first two parameters (the statement and the parameter declaration) be of type ntext, nchar or nvarchar.
When I ran into this error, I copied the error message and googled it (so much for the "googler" stereotype) right away, only to realize later that if I had read the error message carefully, I could have easily determined the cause. I was passing an argument to the stored procedure, so I thought it was weird that it was still looking for one. I assumed it was one of those weird errors and jumped right to that conclusion. I do pay attention to details, but there are just those days when you still fail to do so. So, a debugging reminder: read the error message before you google.
*** sp_executesql is a stored procedure that is best used for dynamic SQL queries. One of its popular uses is to help prevent SQL injection. I'm thinking of posting an entry on SQL injection, but there are a lot of articles out there that likely explain it more clearly and interestingly than I would, so probably next time. Know, though, that stored procedures don't guarantee full protection from SQL injection, especially if you still concatenate inside your stored procedures. And for that (plus other cases where you really need dynamic queries), have a look at sp_executesql.
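To illustrate the error (table and parameter names here are made up), the problem shows up when the statement is passed as a plain varchar literal; prefixing the string literals with N makes them nvarchar and satisfies sp_executesql:

```sql
-- Fails: 'SELECT ...' is varchar, but @statement must be ntext/nchar/nvarchar
EXEC sp_executesql 'SELECT * FROM Customers WHERE Id = @id',
                   N'@id int', @id = 5;

-- Works: the N prefix makes the string literals nvarchar
EXEC sp_executesql N'SELECT * FROM Customers WHERE Id = @id',
                   N'@id int', @id = 5;
```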
I've enjoyed reading this one from Scott Hanselman (and I think most of you will too), so I'm posting it here. http://www.hanselman.com/blog/BeyondElvisEinsteinAndMortNewProgrammingStereotypesForWeb20.aspx
Beyond the funny descriptions, it makes us think about where we fit in, and somehow
understand why that guy in the corner of the office behaves the way he
does. And yes, it's one of the reasons why I decided I would blog (I will post
the other reasons next time).
I was trying to test multiple connections to my local IIS server when I
got the HTTP status/error code above. This happens because there is a limit
on the number of concurrent connections when the Keep-Alive setting is enabled.
So I unchecked "HTTP Keep-Alive enabled" and the issue was resolved.
However, when I ran/debugged from Visual Studio (for an application configured to
run on IIS), I got "Unable to start debugging on the web server. An
authentication error occurred while communicating with the web
server…". IMHO, the error message doesn't quite help in debugging,
but it turns out that Visual Studio just needs the keep-alive
setting ON to work for applications configured to run on IIS. I turned it
back on and it worked fine.
Regardless of privacy settings, ZoneAlarm always rejects cookies
when you browse via localhost, which is not very developer-friendly.
The work-around is to use 127.0.0.1 instead.
If you are using Visual Studio, you can set whether to use IIS and make
the start URL use 127.0.0.1. At least that's how it works for
ASP.NET 2.0 Web Application Projects; there might be small differences
for Web Site Projects and the older ASP.NET 1.1/VS2003. I couldn't verify
other project types for now, but I wanted to post on this issue anyway.
Here's the thread where this information was taken from.