Fuzztrek

[web] MySQL - Triple join help


Hello all, I'm quite the newbie with SQL but so far I've been getting along pretty well with it. However, what I can't figure out is how to select (or in my case, delete) multiple rows from multiple tables if one or more of those tables do not meet the requirements. Perhaps that was a little cryptic. Here's what I have so far:

SELECT abilities.name, rel_ability_keyword.type, rel_ability_user.user_id, rel_ability_asset_type.asset_type_id
FROM abilities
LEFT JOIN rel_ability_keyword USING (ability_id)
LEFT JOIN rel_ability_user USING (ability_id)
LEFT JOIN rel_ability_asset_type USING (ability_id)
WHERE abilities.ability_id = 2

So basically I want to delete the ability, which will always exist. The abilities table is related to the keywords, users, and asset_types tables. These three relation tables may have any number of rows that need to be deleted, but here's where my join gets screwed up: the tables may not have any rows that need to be deleted at all. So if the keyword relation table has nothing to be deleted, I can't delete anything from the user or asset_type relation either, even though they most likely have something that needs to be deleted. Is there any way I can apply three joins independent of each other to a single table? I've tried googling, but I don't really know what I want, hah. I'm running MySQL 4.0.22, by the way.
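For what it's worth, MySQL 4.0 does support multi-table DELETE, and LEFT JOINs there behave the same way as in a SELECT, so the abilities row is removed even when a relation table has no matching rows. A sketch of one possible form, using the table names from the post above (untested against this schema):

```sql
-- Sketch only: multi-table DELETE (available since MySQL 4.0).
-- Listing all four tables before FROM deletes the matched rows
-- from each; the LEFT JOINs keep the abilities row deletable
-- even when a relation table matches nothing.
DELETE abilities, rel_ability_keyword, rel_ability_user, rel_ability_asset_type
FROM abilities
LEFT JOIN rel_ability_keyword
       ON rel_ability_keyword.ability_id = abilities.ability_id
LEFT JOIN rel_ability_user
       ON rel_ability_user.ability_id = abilities.ability_id
LEFT JOIN rel_ability_asset_type
       ON rel_ability_asset_type.ability_id = abilities.ability_id
WHERE abilities.ability_id = 2;
```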

Upgrade to MySQL 4.1, use InnoDB, and you can easily solve the problem with foreign keys.
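For reference, a sketch of what that could look like on one of the relation tables (column names other than ability_id are assumptions here, and InnoDB on MySQL 4.1+ is needed for enforced foreign keys):

```sql
-- Sketch: an InnoDB relation table whose rows are removed
-- automatically when the parent ability row is deleted.
-- keyword_id and the VARCHAR width are illustrative guesses.
CREATE TABLE rel_ability_keyword (
    ability_id INT NOT NULL,
    keyword_id INT NOT NULL,
    type       VARCHAR(32),
    PRIMARY KEY (ability_id, keyword_id),
    FOREIGN KEY (ability_id) REFERENCES abilities (ability_id)
        ON DELETE CASCADE
) TYPE=InnoDB;

-- With cascading constraints on all three relation tables,
-- one statement cleans up everything:
DELETE FROM abilities WHERE ability_id = 2;
```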

Cascade deletion using primary/foreign keys is probably the best way, but why not just do it with multiple DELETE statements programmatically in your code? IMHO this is the easiest way. If your application is not going to run only on MySQL, then you might have some SQL syntax issues using foreign keys.
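The multiple-statement version would just be one DELETE per table, e.g. (a sketch using the table names from the original post):

```sql
-- Sketch: four independent deletes; each succeeds even if it
-- matches zero rows, so empty relation tables are no problem.
DELETE FROM rel_ability_keyword    WHERE ability_id = 2;
DELETE FROM rel_ability_user       WHERE ability_id = 2;
DELETE FROM rel_ability_asset_type WHERE ability_id = 2;
DELETE FROM abilities              WHERE ability_id = 2;
```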

If you go with technomancer's plan, don't forget to lock the appropriate tables during the multiple deletes. Otherwise another query might be getting some weird info in between the deletes.
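On MyISAM that would look something like the following sketch; a WRITE lock blocks other readers and writers on those tables until UNLOCK TABLES runs:

```sql
-- Sketch: take write locks on every table involved so no other
-- query can observe a half-deleted state between the DELETEs.
LOCK TABLES abilities WRITE,
            rel_ability_keyword WRITE,
            rel_ability_user WRITE,
            rel_ability_asset_type WRITE;

DELETE FROM rel_ability_keyword    WHERE ability_id = 2;
DELETE FROM rel_ability_user       WHERE ability_id = 2;
DELETE FROM rel_ability_asset_type WHERE ability_id = 2;
DELETE FROM abilities              WHERE ability_id = 2;

UNLOCK TABLES;
```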

Quote:
Original post by DaBono
If you go with technomancer's plan, don't forget to lock the appropriate tables during the multiple deletes. Otherwise another query might be getting some weird info in between the deletes.


Yeah, this is a very good idea, but doing this sort of DB-level table locking means you then have to start dealing with possible deadlock situations, which can get messy ... good luck :)

Quote:
Original post by technomancer
Cascade deletion using primary/foreign keys is probably the best way, but why not just do it with multiple DELETE statements programmatically in your code? IMHO this is the easiest way. If your application is not going to run only on MySQL, then you might have some SQL syntax issues using foreign keys.


I thought of this, but wasn't sure if I could make it all in one query or not. This will actually make more sense as far as my code goes. Here's another question I have: It seems like everywhere I look it's recommended that I limit the number of queries I send. Does this mean that every query is taxing, or just those which are complicated queries? i.e. is sending 4 queries to actually do work (delete entries) going to be as "inefficient" as, say, issuing a query twice, one with SELECT COUNT for paged results?

From experiences elsewhere, I realize worrying about performance is (generally) something that should not be done until a problem rears its ugly head, but it seems to be such a significant thing with databases.
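As an aside on the paged-results example: MySQL 4.0 has the MySQL-specific SQL_CALC_FOUND_ROWS extension, which avoids issuing a separate COUNT query entirely. A sketch:

```sql
-- Sketch: fetch one page of rows and have the server remember
-- the total match count from the same scan.
SELECT SQL_CALC_FOUND_ROWS name
FROM abilities
LIMIT 0, 10;

-- Returns the number of rows the previous SELECT would have
-- matched without the LIMIT clause.
SELECT FOUND_ROWS();
```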

The recommendation to limit the queries sent to the server is about reducing the network round-trips made, thus reducing the network load on the server. Depending on the connection method/API you're using to talk to the database, you may be able to batch SQL commands. That is, you send 4 queries to the server in one request and receive the results as 4 separate result sets, but only use the one connection.

My guess is the actual clients themselves won't be sending the SQL requests. These requests really should be coming from a server backend. This not only reduces the client/server traffic but also stops any SQL injection attacks from potentially happening. The raw data can then be sent to the client from the server backend as a simple data structure rather than as a result set direct from the database.


Quote:
Original post by technomancer
My guess is the actual clients themselves won't be sending the SQL requests. These requests really should be coming from a server backend. This not only reduces the client/server traffic but also stops any SQL injection attacks from potentially happening. The raw data can then be sent to the client from the server backend as a simple data structure rather than a result set direct from the database.
Yes, I think this is what's happening with what I'm doing. I'm calling from PHP, so that would all be on the server...

Quote:
Original post by DaBono
If you go with technomancer's plan, don't forget to lock the appropriate tables during the multiple deletes. Otherwise another query might be getting some weird info in between the deletes.
So far as I've seen, a multiple-table delete won't be executed until an appropriate lock has been made on all tables being deleted from. For MyISAM, this means that you have to wait until all tables are available (that is, until any currently executing SELECTs [which may be concurrent], INSERTs, UPDATEs, or DELETEs have finished). Normally this is a fast process, but I'd recommend avoiding it if you can. You might also want to try the LOW_PRIORITY modifier to prevent lock-up scenarios.

This is less of an issue if you're dealing with InnoDB, which does row level locking, thereby allowing for concurrent selects and modification, but if you're using InnoDB you should just be using the foreign key constraints to make life a lot simpler anyway.
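For completeness, the LOW_PRIORITY modifier mentioned above just goes after the DELETE keyword; it only has an effect on table-locking engines such as MyISAM. A sketch:

```sql
-- Sketch: wait until no other clients are reading the table
-- before taking the write lock for this delete.
DELETE LOW_PRIORITY FROM rel_ability_keyword WHERE ability_id = 2;
```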
