[web] Facilitating Search

So I'm making my own web-based forum in C++, and I'm using my own database system. The data is laid out like this:

/root
 -/data (folder where forum contents such as the name are saved)
   -/1.cgi (post #1)
   -/2.cgi (post #2)
   -/3.cgi (post #3)
 -/files
   -/1 (folder for uploaded files from thread #1)
     -/my_face.jpg (image uploaded by some user)
Let's say I search for the keyword "evil" and I have 1000 posts. I start matching from the most recent files, and it happens that posts #1000, 998, 977, 934, and 933 match (there are 5 matches per page). Now I want to save these matches in a temporary folder on the server (can I call this a cache folder?) so that when page 1 is requested again with the keyword "evil", the server can skip the search. Also, when page 2 is requested, I can pick up where I left off on page 1 and continue from post #933. To do this, I would have to generate a random ID for each visitor that gets passed around as a cookie while they navigate my site. My questions are:
1) Is there a better way of facilitating search in my situation, or any other suggestions?
2) Is my data structure efficient? (I know it's not, but why not?)
3) If it's not, how can I make it better with minimal changes?
Thanks for your opinions and facts :D
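
For reference, here is a minimal C++ sketch of the cache idea described above, assuming the /root/data/<n>.cgi layout from the post and a hypothetical /root/cache directory for per-visitor result files; the function and file names are illustrative, not taken from any existing code.

#include <cstddef>
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

// Scan posts from newest to oldest, starting at 'startId', and collect up to
// 'perPage' post IDs whose contents contain 'keyword'.
std::vector<int> searchPosts(const std::string& keyword, int startId, int perPage)
{
    std::vector<int> matches;
    for (int id = startId; id >= 1 && (int)matches.size() < perPage; --id)
    {
        std::ostringstream path;
        path << "/root/data/" << id << ".cgi";
        std::ifstream in(path.str().c_str());
        if (!in)
            continue;
        std::stringstream contents;
        contents << in.rdbuf();
        if (contents.str().find(keyword) != std::string::npos)
            matches.push_back(id);
    }
    return matches;
}

// Append this page's matches to a per-visitor cache file (named after the
// session ID stored in the cookie) so the next page request can resume the
// scan from just below the lowest post ID found so far.
void appendToCache(const std::string& sessionId, const std::string& keyword,
                   const std::vector<int>& matches)
{
    std::ostringstream path;
    path << "/root/cache/" << sessionId << "_" << keyword << ".txt";
    std::ofstream out(path.str().c_str(), std::ios::app);
    for (std::size_t i = 0; i < matches.size(); ++i)
        out << matches[i] << "\n";
}

Page 1 would call searchPosts(keyword, 1000, 5) and cache the result; page 2 would read the cache file, take the lowest ID found so far (933 in the example), and continue with searchPosts(keyword, 932, 5). Note that the cache folder needs some kind of expiry policy, or it will slowly fill with stale result files.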
It would probably be much easier to just use a generic database like PostgreSQL or MySQL. It will save you a lot of time and effort, and it will most likely be just as efficient as, or more efficient than, a home-brewed solution. Also, don't waste too much time optimizing until you can prove to yourself that there is a notable bottleneck somewhere.
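
To illustrate the point: with a SQL database, the search and the "resume from page 1" bookkeeping largely disappear, because the query itself can do both the matching and the pagination. Below is a rough C++ sketch using the MySQL C API, assuming a hypothetical posts table with id and body columns; the connection details, table, and column names are made up for the example.

#include <mysql/mysql.h>
#include <cstdio>

int main()
{
    MYSQL* conn = mysql_init(NULL);
    if (conn == NULL ||
        mysql_real_connect(conn, "localhost", "forum_user", "secret",
                           "forum_db", 0, NULL, 0) == NULL)
    {
        std::fprintf(stderr, "connect failed: %s\n",
                     conn ? mysql_error(conn) : "out of memory");
        return 1;
    }

    // Page 2 of the results, 5 per page, newest posts first.  A real version
    // must escape the user-supplied keyword (e.g. with
    // mysql_real_escape_string) before splicing it into the query.
    const char* sql =
        "SELECT id FROM posts WHERE body LIKE '%evil%' "
        "ORDER BY id DESC LIMIT 5 OFFSET 5";

    if (mysql_query(conn, sql) == 0)
    {
        MYSQL_RES* result = mysql_store_result(conn);
        if (result != NULL)
        {
            MYSQL_ROW row;
            while ((row = mysql_fetch_row(result)) != NULL)
                std::printf("matched post #%s\n", row[0]);
            mysql_free_result(result);
        }
    }
    else
    {
        std::fprintf(stderr, "query failed: %s\n", mysql_error(conn));
    }

    mysql_close(conn);
    return 0;
}

A LIKE '%...%' scan is still linear over the table, but MySQL's FULLTEXT indexes (or PostgreSQL's text search) can index the matching itself, and LIMIT/OFFSET replaces the hand-rolled cookie-plus-cache-file scheme entirely.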

