Discussion: Array as FIFO stack
Frank Bitterlich
2007-04-11 10:23:47 UTC
Hi,

I need to maintain an integer array to be used as a FIFO stack
(really a queue). I will .append() up to 10000 values, and when the
array reaches that size, I will "cut off" the oldest (lowest) 5000
entries. Right now I'm doing this by using .remove() in a loop.

The thing needs to be as time-efficient as possible, so I wonder
whether it would be more efficient to create a new array and copy
over the last 5000 entries, rather than removing the first 5000.

Does anybody have experience with this kind of operation, so that I
don't need to test it on my own?

Cheers,
Frank+++

--

Günter Schmidt GmbH
Frank Bitterlich eMail: bitterlich-***@public.gmane.org
Ben-Gurion-Ring 21 WWW: http://www.gsco.de/
D-60437 Frankfurt Tel.: 069 / 156809-29
GERMANY Fax: 069 / 156809-28
Geschäftsführer: Jürgen Hartwich
AG Frankfurt am Main, HRB 76504 - USt.-ID: DE235219624




Guyren Howe
2007-04-11 12:25:35 UTC
Post by Frank Bitterlich
I need to maintain an integer array to be used as a FIFO stack. I
will .append() up to 10000 values, and when the array reaches that
size, I will "cut off" the oldest (lowest) 5000 entries. Right now
I'm doing this by using .remove() in a loop.
The thing needs to be as time-efficient as possible, so I wonder if
it would be more efficient if I created a new array and copied over
the last 5000, rather than removing the first 5000.
Has anybody experience with this kind of operations, so that I don't
need to test it on my own?
The fastest solution, given what you've described, would be an array
of 10,000 elements (or even a MemoryBlock, depending on what type of
values they are) which you never resize. Instead, you would keep head
and tail position values, and maintain everything that way. When you
reached the end of the array, you'd wrap around to the other side.
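
Something along these lines, roughly (an untested sketch typed
straight into the mail; the names are placeholders and the counts are
hard-wired to your 10,000 / 5,000 figures):

  // Properties of a class or module (sketch):
  //   buffer(9999) As Integer   // 10,000 slots, indices 0 through 9999
  //   head As Integer           // index where the next value is written
  //   tail As Integer           // index of the oldest value still held
  //   count As Integer          // how many values are currently held

  Sub Push(value As Integer)
    If count = 10000 Then
      // "cut off" the oldest 5000 by moving the tail forward - no copying
      tail = (tail + 5000) Mod 10000
      count = 5000
    End If
    buffer(head) = value
    head = (head + 1) Mod 10000   // wrap around instead of resizing
    count = count + 1
  End Sub

  Function Pop() As Integer
    Dim v As Integer
    v = buffer(tail)
    tail = (tail + 1) Mod 10000
    count = count - 1
    Return v
  End Function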

Does that make sense?

Regards,

Guyren G Howe
Relevant Logic LLC

guyren-at-relevantlogic.com ~ http://relevantlogic.com

REALbasic, PHP, Ruby/Rails, Python programming
PostgreSQL, MySQL database design and consulting
Technical writing and training


Frank Bitterlich
2007-04-11 12:48:49 UTC
Dang - a ring buffer. Should have thought of that myself :)

That's certainly the quickest way!

Thanks,
Frank+++
Post by Guyren Howe
Post by Frank Bitterlich
I need to maintain an integer array to be used as a FIFO stack. I
will .append() up to 10000 values, and when the array reaches that
size, I will "cut off" the oldest (lowest) 5000 entries. Right now
I'm doing this by using .remove() in a loop.
The thing needs to be as time-efficient as possible, so I wonder if
it would be more efficient if I created a new array and copied over
the last 5000, rather than removing the first 5000.
Has anybody experience with this kind of operations, so that I don't
need to test it on my own?
The fastest solution given what you've described would be an array of
10,000 elements (or even a memory block, depending on what types of
values they are), which you don't resize. Instead, you would keep
head and tail position values, and maintain everything that way. When
you reached the end of the array, you'd wrap around the other side.
Does that make sense.
Regards,
Guyren G Howe
Relevant Logic LLC
--

Günter Schmidt GmbH
Frank Bitterlich eMail: bitterlich-***@public.gmane.org
Ben-Gurion-Ring 21 WWW: http://www.gsco.de/
D-60437 Frankfurt Tel.: 069 / 156809-29
GERMANY Fax: 069 / 156809-28
Geschäftsführer: Jürgen Hartwich
AG Frankfurt am Main, HRB 76504 - USt.-ID: DE235219624




Fargo Holiday
2007-04-11 17:39:01 UTC
Post by Guyren Howe
Post by Frank Bitterlich
I need to maintain an integer array to be used as a FIFO stack. I
will .append() up to 10000 values, and when the array reaches that
size, I will "cut off" the oldest (lowest) 5000 entries. Right now
I'm doing this by using .remove() in a loop.
The thing needs to be as time-efficient as possible, so I wonder if
it would be more efficient if I created a new array and copied over
the last 5000, rather than removing the first 5000.
Has anybody experience with this kind of operations, so that I don't
need to test it on my own?
The fastest solution given what you've described would be an array of
10,000 elements (or even a memory block, depending on what types of
values they are), which you don't resize. Instead, you would keep
head and tail position values, and maintain everything that way. When
you reached the end of the array, you'd wrap around the other side.
Does that make sense.
Regards,
Guyren G Howe
Relevant Logic LLC
guyren-at-relevantlogic.com ~ http://relevantlogic.com
REALbasic, PHP, Ruby/Rails, Python programming
PostgreSQL, MySQL database design and consulting
Technical writing and training
Howdy,
I'm afraid I didn't quite follow what you mean there. Could someone post
a simple example?

Thanks,
Fargo
Guyren Howe
2007-04-11 17:43:26 UTC
Post by Fargo Holiday
Post by Guyren Howe
Post by Frank Bitterlich
I need to maintain an integer array to be used as a FIFO stack. I
will .append() up to 10000 values, and when the array reaches that
size, I will "cut off" the oldest (lowest) 5000 entries. Right now
I'm doing this by using .remove() in a loop.
The thing needs to be as time-efficient as possible, so I wonder if
it would be more efficient if I created a new array and copied over
the last 5000, rather than removing the first 5000.
Has anybody experience with this kind of operations, so that I don't
need to test it on my own?
The fastest solution given what you've described would be an array of
10,000 elements (or even a memory block, depending on what types of
values they are), which you don't resize. Instead, you would keep
head and tail position values, and maintain everything that way. When
you reached the end of the array, you'd wrap around the other side.
Howdy,
I'm afraid I didn't quite follow what you mean there. Could someone post
a simple example?
Imagine the 10,000 elements are arranged, not in a line, but in a
circle. You keep track of two numbers, representing where in the
circle the head and tail of your list are. When these meet, you
extract the first 5,000 elements from the list, moving the tail up.
Now, because it's a ring, you have room in front of the head of the
list for another 5,000 elements. Lather, rinse and repeat.
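
If it helps to see the index arithmetic, here is the same idea on a
tiny 8-slot ring (rough sketch only; your real buffer would have
10,000 slots):

  Dim ring(7) As Integer      // 8 slots, indices 0 through 7
  Dim tail, i, idx As Integer

  tail = 6                    // suppose the oldest value sits at index 6
  For i = 0 To 3
    idx = (tail + i) Mod 8    // i-th oldest value lives at 6, 7, 0, 1 - it wraps
    // read or write ring(idx) here; nothing ever gets shifted or removed
  Next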

Regards,

Guyren G Howe
Relevant Logic LLC

guyren-at-relevantlogic.com ~ http://relevantlogic.com

REALbasic, PHP, Ruby/Rails, Python programming
PostgreSQL, MySQL database design and consulting
Technical writing and training


Fargo Holiday
2007-04-11 17:56:37 UTC
Post by Guyren Howe
Post by Fargo Holiday
Post by Guyren Howe
Post by Frank Bitterlich
I need to maintain an integer array to be used as a FIFO stack. I
will .append() up to 10000 values, and when the array reaches that
size, I will "cut off" the oldest (lowest) 5000 entries. Right now
I'm doing this by using .remove() in a loop.
The thing needs to be as time-efficient as possible, so I wonder if
it would be more efficient if I created a new array and copied over
the last 5000, rather than removing the first 5000.
Has anybody experience with this kind of operations, so that I don't
need to test it on my own?
The fastest solution given what you've described would be an array of
10,000 elements (or even a memory block, depending on what types of
values they are), which you don't resize. Instead, you would keep
head and tail position values, and maintain everything that way. When
you reached the end of the array, you'd wrap around the other side.
Howdy,
I'm afraid I didn't quite follow what you mean there. Could someone post
a simple example?
Imagine the 10,000 elements are arranged, not in a line, but in a
circle. You keep track of two numbers, representing where in the
circle the head and tail of your list are. When these meet, you
extract the first 5,000 elements from the list, moving the tail up.
Now, because it's a ring, you have room in front of the head of the
list for another 5,000 elements. Lather, rinse and repeat.
Regards,
Guyren G Howe
Relevant Logic LLC
guyren-at-relevantlogic.com ~ http://relevantlogic.com
REALbasic, PHP, Ruby/Rails, Python programming
PostgreSQL, MySQL database design and consulting
Technical writing and training
Ah, ok. Too simple for my un-caffeinated mind. Here's another
"brilliant" question to display the depth of my experience: What is the
advantage of doing it this way, versus a looping remove? Don't you end
up doing essentially the same task when extracting?

Thanks,
Fargo
Guyren Howe
2007-04-11 18:06:29 UTC
Post by Fargo Holiday
Post by Guyren Howe
Imagine the 10,000 elements are arranged, not in a line, but in a
circle. You keep track of two numbers, representing where in the
circle the head and tail of your list are. When these meet, you
extract the first 5,000 elements from the list, moving the tail up.
Now, because it's a ring, you have room in front of the head of the
list for another 5,000 elements. Lather, rinse and repeat.
Ah, ok. Too simple for my un-caffeinated mind. Here's another
"brilliant" question to display the depth of my experience: What is the
advantage of doing it this way, versus a looping remove? Don't you end
up doing essentially the same task when extracting?
The original question was about speed. This is fast because you never
have to shift or even remove elements: every Remove(0) has to move
all of the remaining entries down by one, whereas here you just move
the head and tail indices around and reuse the slots of the array.
You could use the array naively, ReDim'ing and removing as you go,
but this will be hundreds of times faster than that.
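
If you want to see the difference for yourself, a crude timing of
just the removal step looks something like this (rough sketch using
the Microseconds function; run it and compare):

  Dim a() As Integer
  Dim i As Integer
  For i = 1 To 10000
    a.Append i
  Next

  Dim t As Double
  t = Microseconds
  For i = 1 To 5000
    a.Remove 0              // each Remove(0) shifts every remaining element down
  Next
  MsgBox "Remove loop took " + Str(Microseconds - t) + " microseconds"

  // The ring-buffer equivalent of that whole "cut" is just two assignments:
  //   tail = (tail + 5000) Mod 10000
  //   count = 5000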

Regards,

Guyren G Howe
Relevant Logic LLC

guyren-at-relevantlogic.com ~ http://relevantlogic.com

REALbasic, PHP, Ruby/Rails, Python programming
PostgreSQL, MySQL database design and consulting
Technical writing and training


Fargo Holiday
2007-04-11 23:42:36 UTC
Post by Guyren Howe
Post by Fargo Holiday
Post by Guyren Howe
Imagine the 10,000 elements are arranged, not in a line, but in a
circle. You keep track of two numbers, representing where in the
circle the head and tail of your list are. When these meet, you
extract the first 5,000 elements from the list, moving the tail up.
Now, because it's a ring, you have room in front of the head of the
list for another 5,000 elements. Lather, rinse and repeat.
Ah, ok. Too simple for my un-caffeinated mind. Here's another
"brilliant" question to display the depth of my experience: What is the
advantage of doing it this way, versus a looping remove? Don't you end
up doing essentially the same task when extracting?
The original question was about speed. This is fast because you don't
have to shift or even remove elements. You just move the head and
tail pointers around and reuse the elements of the array. You could
just naively use the array, rediming and whatever you want to. But
this will be hundreds of times faster than that.
Regards,
Guyren G Howe
Relevant Logic LLC
guyren-at-relevantlogic.com ~ http://relevantlogic.com
REALbasic, PHP, Ruby/Rails, Python programming
PostgreSQL, MySQL database design and consulting
Technical writing and training
Ahhhh, ok. See, now that I've had a day of colas and naps while
commuting, I finally understand. I didn't realize you were talking
about just re-using the elements; I had imagined you were still
reading and removing them.

That's really handy. I vaguely recall doing something like that with a
bitfield in C/C++, but it had faded way into the background to join
other concepts I'd learned and never used enough.

Thanks,
Fargo
Guyren Howe
2007-04-12 02:00:28 UTC
Post by Fargo Holiday
Post by Guyren Howe
Post by Fargo Holiday
Post by Guyren Howe
Imagine the 10,000 elements are arranged, not in a line, but in a
circle. You keep track of two numbers, representing where in the
circle the head and tail of your list are. When these meet, you
extract the first 5,000 elements from the list, moving the tail up.
Now, because it's a ring, you have room in front of the head of the
list for another 5,000 elements. Lather, rinse and repeat.
Ah, ok. Too simple for my un-caffeinated mind. Here's another
"brilliant" question to display the depth of my experience: What is the
advantage of doing it this way, versus a looping remove? Don't you end
up doing essentially the same task when extracting?
The original question was about speed. This is fast because you don't
have to shift or even remove elements. You just move the head and
tail pointers around and reuse the elements of the array. You could
just naively use the array, rediming and whatever you want to. But
this will be hundreds of times faster than that.
Ahhhh, ok. See, now that I've had a day of colas and naps while
commuting, I finally understand. I didn't realize you were talking about
just re-using the elements, but instead had imagined that you were
speaking of still reading and removing elements.
That's really handy. I vaguely recall doing something like that with a
bitfield in C/C++, but it had faded way into the background to join
other concepts I'd learned and never used enough.
Worth mentioning also: you didn't say what you were keeping in the
array. But if it's objects, then there's another speedup available to
you: rather than creating and destroying the objects in the array,
just reuse them. Give them an Initialize method that sets them back
to the state a new object would have, and when you don't need them
any more, just reinitialize them. Or skip that, and simply set their
state to whatever you need the first time you reuse them. If they
refer to further objects in turn, consider reusing those as well.
This way you're not hitting the memory manager all the time, which
will also give you a significant speedup.
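
A minimal sketch of that kind of pool (the Worker class name, the
Initialize method and the mPool property are placeholders; mPool
would be a module or class property in a real project):

  // Assumes a module-level property:  mPool() As Worker

  Function Acquire() As Worker
    Dim w As Worker
    If Ubound(mPool) >= 0 Then
      w = mPool.Pop           // reuse an existing instance
      w.Initialize            // reset it to a fresh state (or do this on release)
    Else
      w = New Worker          // pool is empty, so pay for one real allocation
    End If
    Return w
  End Function

  Sub Release(w As Worker)
    mPool.Append w            // keep the instance around for next time
  End Sub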

If you're keeping strings, you could try something similar: say, use
a MemoryBlock to hold each string, sized to the biggest string you
expect to store.

There are other similar reuse strategies available to you: for
example, connection pooling if any of this is network-related.

Oh, and note this: you can implement the reuse thing by having the
objects put themselves into a pool array in their destructor. By
doing that, there is a reference to them again, and they won't get
reclaimed. I recently checked this with Mars, and he said that this
would work just fine.
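
In code the trick is just this (sketch; Reusable and mPool are
placeholder names):

  // In a hypothetical class Reusable, with a module-level pool mPool() As Reusable:

  Sub Destructor()
    // Appending Self to the pool creates a fresh reference to the object,
    // so it is not actually reclaimed.
    mPool.Append Self
  End Sub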

Regards,

Guyren G Howe
Relevant Logic LLC

guyren-at-relevantlogic.com ~ http://relevantlogic.com

REALbasic, PHP, Ruby/Rails, Python programming
PostgreSQL, MySQL database design and consulting
Technical writing and training


Frank Bitterlich
2007-04-12 10:47:09 UTC
Post by Guyren Howe
Post by Fargo Holiday
Post by Guyren Howe
...The original question was about speed. This is fast because
you don't
have to shift or even remove elements. You just move the head and
tail pointers around and reuse the elements of the array. You could
just naively use the array, rediming and whatever you want to. But
this will be hundreds of times faster than that.
Ahhhh, ok. See, now that I've had a day of colas and naps while
commuting, I finally understand. I didn't realize you were talking about
just re-using the elements, but instead had imagined that you were
speaking of still reading and removing elements.
That's really handy. I vaguely recall doing something like that with a
bitfield in C/C++, but it had faded way into the background to join
other concepts I'd learned and never used enough.
Worth mentioning also: you didn't say what you were keeping in the
array.
I (the OP) am just keeping doubles or integers there - so the
question was really just about avoiding having to shuffle values
around in the array.
Post by Guyren Howe
Oh, and note this: you can implement the reuse thing by having the
objects put themselves into a pool array in their destructor. By
doing that, there is a reference to them again, and they won't get
reclaimed. I recently checked this with Mars, and he said that this
would work just fine.
Now that's an interesting sidenote...

Do I understand this correctly - you can prevent an object from
being destroyed even when its destructor has already executed? Hm.
Interesting concept. Offers a lot of possibilities.

Would that mean that the Destructor will be called again when the
last reference is dropped again afterwards?

Thanks,
Frank+++


--

Günter Schmidt GmbH
Frank Bitterlich eMail: bitterlich-***@public.gmane.org
Ben-Gurion-Ring 21 WWW: http://www.gsco.de/
D-60437 Frankfurt Tel.: 069 / 156809-29
GERMANY Fax: 069 / 156809-28
Geschäftsführer: Jürgen Hartwich
AG Frankfurt am Main, HRB 76504 - USt.-ID: DE235219624




Guyren Howe
2007-04-12 16:14:28 UTC
Post by Frank Bitterlich
Do I understand this correctly - you can prevent an object from
being destroyed even when its destructor has already executed? Hm.
Interesting concept. Offers a lot of possibilities.
Would that mean that the Destructor will be called again when the
last reference is dropped again afterwards?
Yes and yes.


Regards,

Guyren G Howe
Relevant Logic LLC

guyren-at-relevantlogic.com ~ http://relevantlogic.com

REALbasic, PHP, Ruby/Rails, Python programming
PostgreSQL, MySQL database design and consulting
Technical writing and training



Charles Yeomans
2007-04-11 16:25:31 UTC
Post by Frank Bitterlich
Hi,
I need to maintain an integer array to be used as a FIFO stack. I
will .append() up to 10000 values, and when the array reaches that
size, I will "cut off" the oldest (lowest) 5000 entries. Right now
I'm doing this by using .remove() in a loop.
The thing needs to be as time-efficient as possible, so I wonder if
it would be more efficient if I created a new array and copied over
the last 5000, rather than removing the first 5000.
Has anybody experience with this kind of operations, so that I don't
need to test it on my own?
If it needs to be fast, you should allocate the array in advance and
use a stack pointer or two to keep track of the current stack size
and the current location.

Charles Yeomans
