Very Very Urgent... [message #54124] Wed, 30 October 2002 21:30
sai sreenivas jeedigunta
Messages: 370
Registered: November 2001
Senior Member
Hi all,
We have a product consisting of some tables.
Now we want to standardize performance, so we are planning to keep a limit of 1000 records in the table.
Once 1000 records is exceeded, the oldest data needs to be archived.
Please suggest some archiving solutions,
or if not, please suggest some possible links.

Thanks and Regards
SAI
Re: Very Very Urgent... [message #54126 is a reply to message #54124] Wed, 30 October 2002 23:23
Trifon Anguelov
Messages: 514
Registered: June 2002
Senior Member
Create an ON-INSERT trigger on this table, and when the 1000-record limit is reached, copy the data to a temporary table, which you can then export to an OS dump file.

Archive the file with your backup software.
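
For illustration, a minimal sketch of the trigger part, assuming a hypothetical table T with a CREATED_DATE column and a same-shape archive table T_ARCHIVE (a statement-level trigger avoids the mutating-table restriction, though a scheduled job would usually be preferable to doing this on every insert):

create or replace trigger t_limit_trg
after insert on t
declare
  v_cutoff date;
begin
  -- find the CREATED_DATE of the 1000th-newest row
  select min(created_date)
    into v_cutoff
    from (select created_date from t order by created_date desc)
   where rownum <= 1000;

  -- move everything older than that cutoff into the archive table
  insert into t_archive select * from t where created_date < v_cutoff;
  delete from t where created_date < v_cutoff;
end;
/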

Hope that helps,

clio_usa
OCP - DBA

Re: Very Very Urgent... [message #54137 is a reply to message #54124] Thu, 31 October 2002 09:54
andrew again
Messages: 2577
Registered: March 2000
Senior Member
Keeping 1000 records sounds a bit arbitrary. It would be more common to say that all records older than x days (or similar) should be archived. You also need to question the motivation for the functionality: is it to keep a copy of the data, or to keep the number of records in your table low for performance reasons? If the latter, I'd seriously question that, because insert/delete/query speed has virtually nothing to do with the number of records in a table. For the date-based approach, see the sketch below.
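
For example, a minimal date-based archiving sketch, assuming a hypothetical ORDERS table with a CREATED_DATE column and a matching ORDERS_ARCHIVE table (the six-month cutoff is arbitrary):

-- copy old rows to the archive table, then remove them from the source
insert into orders_archive
  select * from orders
   where created_date < add_months(sysdate, -6);

delete from orders
 where created_date < add_months(sysdate, -6);

commit;

You could run something like this nightly from DBMS_JOB instead of putting the work in a trigger.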
Kind Attention Andrew... [message #54139 is a reply to message #54124] Thu, 31 October 2002 10:59
vinay
Messages: 27
Registered: December 1999
Junior Member
Hello Andrew,
I have a question about the answer you gave to Mr. Sai. I understand that the time will not vary for insert/delete, but please explain how you can say that the time will not vary when querying a database with 100 records versus 1000 records?

Please explain.
Re: Kind Attention Andrew... [message #54145 is a reply to message #54139] Thu, 31 October 2002 12:27
andrew again
Messages: 2577
Registered: March 2000
Senior Member
Well, it's all to do with indexing. If you can put selective indexes on your table, and you use those selective indexes to query the data, then queries will be very fast:

create table all_objects_tmp_copy as (
select * from all_objects
union all
select * from all_objects
union all
select * from all_objects
union all
select * from all_objects
union all
select * from all_objects
union all
select * from all_objects
union all
select * from all_objects
union all
select * from all_objects
union all
select * from all_objects
union all
select * from all_objects);

create index all_objects_tmp_copy_ix on all_objects_tmp_copy(object_id);

analyze table all_objects_tmp_copy estimate statistics;

select count(*) from all_objects_tmp_copy;
226040
DECLARE
   t1_start   PLS_INTEGER;
   t1_end     PLS_INTEGER;
   t2_start   PLS_INTEGER;
   t2_end     PLS_INTEGER;
   x          PLS_INTEGER;
BEGIN
   -- I know that 100 is a valid OBJECT_ID in my db

   t1_start := DBMS_UTILITY.get_time;
   -- this will use the index
   for i in 1..1000 loop
     select count(*) into x from all_objects_tmp_copy where object_id = 100;
   end loop;
   t1_end := DBMS_UTILITY.get_time;
   DBMS_OUTPUT.put_line ('Hundredths of a sec to do 1000 * test1 (indexed query) :'|| TO_CHAR (t1_end - t1_start));
   
   t2_start := DBMS_UTILITY.get_time;
   -- this will NOT use the index (the expression on OBJECT_ID disables it)
   for i in 1..1 loop
     select count(*) into x from all_objects_tmp_copy where object_id+0 = 100;
   end loop;
   t2_end := DBMS_UTILITY.get_time;
   DBMS_OUTPUT.put_line ('Hundredths of a sec to do 1 * test2 (NOT indexed query) :' || TO_CHAR (t2_end - t2_start));
END;
/

Hundredths of a sec to do 1000 * test1 (indexed query) :13
Hundredths of a sec to do 1 * test2 (NOT indexed query) :41

As you can see, we did 1000 indexed queries in 13/100 sec, even though the table has 226,000 records. (The data is obviously cached after the first query, but I ran the query 1000 times to get the time high enough to measure; otherwise it showed 0/100 sec, indicating that a single indexed query took < 1/100 sec.) The second query bypasses the index because wrapping the indexed column in an expression (object_id+0) prevents the optimizer from using the B-tree index on object_id, forcing a full table scan.

If your data is such that you can't use indexing, then yes, full table scans on lots of data can take a long time. To confirm which plan each query gets, see the EXPLAIN PLAN sketch below.
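
As a side note, one way to verify which plan each query uses is EXPLAIN PLAN (a sketch; DBMS_XPLAN.DISPLAY is available from Oracle 9i, on older releases you would query PLAN_TABLE directly):

-- indexed predicate
explain plan for
select count(*) from all_objects_tmp_copy where object_id = 100;

select * from table(dbms_xplan.display);

-- expression on the indexed column
explain plan for
select count(*) from all_objects_tmp_copy where object_id+0 = 100;

select * from table(dbms_xplan.display);

The first should show an INDEX RANGE SCAN on all_objects_tmp_copy_ix; the second, a full scan of all_objects_tmp_copy.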