[.net] GC + SQLConnection + Memory Question


Darkneon
Hi guys, I know this is a stupid question to ask because I could test it myself and see the results, but I don't have SQL Server at home, so here it goes.

Situation: I have a loop that accesses a database every 60 seconds. Which approach is more efficient in terms of memory management?

1. Create the SqlConnection object once, and every minute open the connection and then close it?
2. Every minute, create the SqlConnection object, open the connection, close the connection, and dispose of the object?

Thanks,
Darkneon

Toolmaker
How long is your application supposed to run?

If it's supposed to run 24/7, reusing the connection is better both memory-wise and performance-wise. If you keep 'new'ing a SqlConnection object, at some point memory fills up and the GC needs to clean up.

Your app will suffer a small slowdown during the GC collection, nothing too spectacular. However, re-using the connection object should be no problem (I'm not 100% sure, try it). And if you re-use the connection object, you also skip a little bit of additional overhead from the constructor being called.
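A minimal sketch of the reuse approach described above: one SqlConnection object for the life of the app, opened and closed around each 60-second poll. The connection string and query here are placeholders, not anything from the original question.

```csharp
using System.Data.SqlClient;
using System.Threading;

class PollingDemo
{
    static void Main()
    {
        // One object for the whole run; Dispose() when the app ends.
        using (SqlConnection cnn = new SqlConnection(
            "Server=localhost;Database=MyDb;Integrated Security=true"))
        {
            while (true)
            {
                cnn.Open();                 // cheap if pooling is on
                using (SqlCommand cmd = new SqlCommand("SELECT GETDATE()", cnn))
                {
                    cmd.ExecuteScalar();
                }
                cnn.Close();                // returns the connection to the pool
                Thread.Sleep(60 * 1000);    // wait a minute
            }
        }
    }
}
```

Note that with ADO.NET connection pooling enabled (the default), Open()/Close() on the same object mostly just checks a pooled connection in and out rather than tearing down the TCP session.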

Toolmaker

Bracket
I recommend re-using the connection object, either directly or through a connection pool. If you do want to create a new one each time, wrapping it in `using (cnn = new SqlConnection(...)) { }` will ensure that it is disposed of immediately when you're done with it.
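A quick sketch of that using pattern. The connection string and table name are placeholders for illustration.

```csharp
using System.Data.SqlClient;

class UsingDemo
{
    static void Main()
    {
        string connStr = "Server=localhost;Database=MyDb;Integrated Security=true";

        // Dispose() (which also closes the connection) runs automatically
        // when the block exits, even if an exception is thrown inside it.
        using (SqlConnection cnn = new SqlConnection(connStr))
        {
            cnn.Open();
            using (SqlCommand cmd = new SqlCommand(
                "SELECT COUNT(*) FROM MyTable", cnn))
            {
                int rows = (int)cmd.ExecuteScalar();
            }
        } // cnn disposed here
    }
}
```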

On a related note, two things that have bitten me occasionally: do as much as you possibly can in a single SqlConnection, and make absolutely sure you close it when you are done with it.

Building a connection is quite expensive (less so for SQL Server than for some databases: PostgreSQL connections are really quite expensive/slow, while MySQL connections are very cheap/quick), often more expensive in terms of time than a simple SELECT query (or even an INSERT/UPDATE in some cases). Batching as many commands as you can into a single connection cycle will save you some overhead. For multiple INSERT/UPDATE calls, wrapping them in a transaction and committing once can also offer a significant speed-up (as an example, one database conversion application I worked on was more than 20 times faster batching INSERT calls into groups of 500). You may find your app performs better keeping a connection open and just checking to see if it's still good (reopening if it isn't) every 60 seconds.
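The transaction batching idea above could look something like this. The `Readings` table and its columns are hypothetical, invented for the example.

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class BatchInsertDemo
{
    // Insert many rows under one transaction: one commit for the whole
    // batch instead of an implicit commit per INSERT.
    static void InsertBatch(SqlConnection cnn, double[] values)
    {
        using (SqlTransaction tx = cnn.BeginTransaction())
        {
            using (SqlCommand cmd = new SqlCommand(
                "INSERT INTO Readings (Time, Value) VALUES (@t, @v)", cnn, tx))
            {
                cmd.Parameters.Add("@t", SqlDbType.DateTime);
                cmd.Parameters.Add("@v", SqlDbType.Float);

                foreach (double v in values)
                {
                    cmd.Parameters["@t"].Value = DateTime.Now;
                    cmd.Parameters["@v"].Value = v;
                    cmd.ExecuteNonQuery();
                }
            }
            tx.Commit(); // everything becomes durable in one go
        }
    }
}
```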

For the latter: if you don't close the connection (or at least free it up to a connection pool), you will eventually exhaust SQL Server's connection pool and get some very unhelpful error messages! I try to wrap whatever follows connection.Open() in a try/finally that guarantees connection.Close() will be called, even if something goes wrong.
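That try/finally pattern, sketched out (the query is a placeholder):

```csharp
using System.Data.SqlClient;

class TryFinallyDemo
{
    static void Query(SqlConnection cnn)
    {
        cnn.Open();
        try
        {
            using (SqlCommand cmd = new SqlCommand("SELECT 1", cnn))
            {
                cmd.ExecuteScalar();
            }
        }
        finally
        {
            // Runs whether the try block succeeded or threw, so the
            // connection always gets closed / returned to the pool.
            cnn.Close();
        }
    }
}
```

A `using` block around the connection achieves the same guarantee, since Dispose() closes the connection.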

Talonius
..changes shoes to SQL Server DBA..

If you were creating a new connection every 60 seconds against a database I adminned I would certainly hunt you down and have you change the behavior of the application.

The first problem (from *my* viewpoint) is that auditing the behavior of your application becomes problematic because your connection ID is changing every minute.

The second problem is that SQL Server doesn't handle multiple connections going up and down very well; in your case it sounds like you'd have one connection per minute, no problem. At my place of work we ended up having 800 connections or so going up and down. SQL Server would eventually slow down over the course of a week or so of the connections being constructed and torn down. The solution was to restart SQL Server on a regular basis. (We don't have complete control over the behavior of the client. Our case is extreme, but it does illustrate issues with connections.)

The third problem is that a new connection every 60 seconds may conflict with the timeout of the database. If your connections are set to time out every 180 seconds but you're creating one every 60 seconds... well, resources aren't going to be happy. (Assuming you fail to close the connection, or even to create it initially.)

The fourth problem is that, really, it's better design to create one connection and reuse it. Create the connection when the application starts; dispose of the connection when your application ends. (And make sure you dispose of it! As mentioned above, a using statement will take care of this but that assumes a connection that is used and disposed immediately.) If there's no requirement for multiple connections, don't use them.

You can use SqlConnection.State to determine if the connection is still valid or if it has been broken before using it.
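A small sketch of that check, for a long-lived connection you validate before each use:

```csharp
using System.Data;
using System.Data.SqlClient;

class StateCheckDemo
{
    // Make sure a long-lived connection is usable before each poll,
    // reopening it if it has been broken or closed in the meantime.
    static void EnsureOpen(SqlConnection cnn)
    {
        if (cnn.State == ConnectionState.Broken)
        {
            // A broken connection must be closed before it can be reopened.
            cnn.Close();
        }
        if (cnn.State != ConnectionState.Open)
        {
            cnn.Open();
        }
    }
}
```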

This is ignoring other factors such as access to the temporary database changing with each new connection (if you want to store data across each access attempt you could do so in tempdb if your connection ID remains constant) and SQL Server being able to profile your application's behavior and adjust accordingly.

Also, if you want to slow your home machine down, grab SQL Server Express 2005. (*grin*) It's the same central engine as SQL Server, just arbitrarily limited in connections and database size.
