
I am teaching myself Haskell. The first impression is very good, but the phrase "haskell is polynomially reducible" is making me sad :(. Anyway, I am trying to port my algorithm written in C. The key to performance is the ability to remove an element from the end of a list in O(1), but the original Haskell functions last and init are O(n). My questions are: 1) Is the last function something like a "black box" written in C++ which performs in O(1), so I shouldn't even try to imagine some Haskell O(1) equivalent? 2) Or will the optimizer (LLVM?) reduce init & last complexity to O(1)? 3) Some people suggest using the sequences package, but how do they implement an O(1) init & last equivalent in Haskell? -- Regards, Marat.

On 2014-07-17 11:35, Закиров Марат wrote:
I am teaching myself Haskell. The first impression is very good, but the phrase "haskell is polynomially reducible" is making me sad :(. Anyway, I am trying to port my algorithm written in C. The key to performance is the ability to remove an element from the end of a list in O(1), but the original Haskell functions last and init are O(n).
One viable option might be to adjust the algorithm so that instead of appending to the list, it prepends (which is O(1) in Haskell), i.e. you manage the list in reverse and only actually reverse it to the original order when needed (e.g. when printing). -- Frerich Raabe - raabe@froglogic.com www.froglogic.com - Multi-Platform GUI Testing
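The "build in reverse" idea can be sketched in a few lines. This is a minimal illustration (the function name `collect` and the `Int` element type are made up for the example): every element is added with an O(1) prepend, and the single reverse at the end keeps the total cost at O(n).

```haskell
-- Accumulate with (:) in O(1) per element, reverse once at the end.
collect :: [Int] -> [Int]
collect = go []
  where
    go acc []     = reverse acc      -- restore original order, paid once
    go acc (x:xs) = go (x : acc) xs  -- O(1) prepend per element
```

Here `collect` is just the identity on lists, but the same accumulate-then-reverse shape works for any loop that appends results one at a time.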

Thanks, but maybe I formulated my question badly. I'd like to insert
at the beginning of the list and remove from the tail, both in O(1).
-- Regards, Marat.
_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

On 2014-07-17 11:47, Закиров Марат wrote:
Thanks, but maybe I formulated my question badly. I'd like to insert at the beginning of the list and remove from the tail, both in O(1).
...then you might find DList interesting, see http://hackage.haskell.org/package/dlist-0.5/docs/Data-DList.html -- Frerich Raabe - raabe@froglogic.com www.froglogic.com - Multi-Platform GUI Testing

On Thu, Jul 17, 2014 at 11:55:31AM +0200, Frerich Raabe wrote:
On 2014-07-17 11:47, Закиров Марат wrote:
Thanks, but maybe I formulated my question badly. I'd like to insert at the beginning of the list and remove from the tail, both in O(1).
...then you might find DList interesting, see
http://hackage.haskell.org/package/dlist-0.5/docs/Data-DList.html
How does DList support O(1) removal from the tail?

On 2014-07-17 12:02, Tom Ellis wrote:
How does DList support O(1) removal from the tail?
Oops, sorry, it doesn't. I misread the mail. -- Frerich Raabe - raabe@froglogic.com www.froglogic.com - Multi-Platform GUI Testing

A question to everyone:
Tom said that a finger tree gives us O(1) removal of the last element, but
in Haskell all data is persistent.
So the function should return the list as-is minus the last element. How
can that be O(1)? This just blows my mind...
My hypothesis is that the compiler somehow reduces the creation of a new
list to just adding or removing one element. If that is not so,
then even ':', which just adds to the head of a list, would be an O(n)
operation, because it would have to return a brand new list with one
element added. Or maybe the functional approach uses a rather different
complexity metric, where copying some structure (a list, for example)
is just O(1)? If so, the question about the compiler still stands.
-- Regards, Marat.

On Thu, Jul 17, 2014 at 03:34:17PM +0400, Закиров Марат wrote:
Tom said that a finger tree gives us O(1) removal of the last element, but in Haskell all data is persistent, so the function should return the list as-is minus the last element. How can that be O(1)? This just blows my mind...
Sounds like magic doesn't it :)
My hypothesis is that the compiler somehow reduces the creation of a new list to just adding or removing one element. If that is not so, then even ':', which just adds to the head of a list, would be an O(n) operation, because it would have to return a brand new list with one element added. Or maybe the functional approach uses a rather different complexity metric, where copying some structure (a list, for example) is just O(1)? If so, the question about the compiler still stands.
But no, there's no compiler magic, just an amazing data structure. The caveat is that the complexity is amortised, not guaranteed for every operation. Have a look at the paper if you want to learn how it works. It's linked from the Hackage docs. http://hackage.haskell.org/package/containers-0.2.0.1/docs/Data-Sequence.htm... Tom
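For what it's worth, here is a small sketch of what removing the last element looks like with Data.Sequence (the helper name `dropLast` is made up for this example): `viewr` splits off the rightmost element without rebuilding the rest of the structure.

```haskell
import Data.Sequence (Seq, ViewR (..), viewr)
import qualified Data.Sequence as Seq

-- Drop the last element of a Seq; viewr gives amortised O(1)
-- access to the right end, sharing the rest of the tree.
dropLast :: Seq a -> Seq a
dropLast s = case viewr s of
  EmptyR    -> s      -- nothing to drop from an empty sequence
  rest :> _ -> rest   -- everything except the rightmost element
```

For example, `dropLast (Seq.fromList [1,2,3,4])` gives the sequence `[1,2,3]`, and the original sequence is still intact because the data is persistent.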

There are also purely functional worst-case constant-time (not amortized) queues out there. I'm on my phone right now, otherwise I'd try to hunt down a package to link. Some of them are actually pretty simple and easy to remember how to write. Okasaki's Purely Functional Data Structures describes some. To give an idea of how far it can go, there are even (much, much more complicated) worst-case constant-time double-ended queues that support constant-time concatenation (again, without amortization).

Jake, it would be great if you could give some examples when you find your
notebook :) And a link to the book about purely functional data structures
which you are talking about.
Also, if any haskell.org maintainers are here, I'd like to recommend
that they pay more attention to optimality/performance questions,
because almost the first question that appears in the head of a standard
C/C++ programmer is "Do I get the same performance?" (even if he does not
need it).
Maybe some simple and cool PDF tutorial which describes why Haskell
can be as fast as the others would be great to have.
-- Regards, Marat.

On Thu, 17 Jul 2014 16:47:43 +0200, Закиров Марат
Jake, it would be great if you could give some examples when you find your notebook :) And a link to the book about purely functional data structures which you are talking about. Also, if any haskell.org maintainers are here, I'd like to recommend that they pay more attention to optimality/performance questions, because almost the first question that appears in the head of a standard C/C++ programmer is "Do I get the same performance?" (even if he does not need it). Maybe some simple and cool PDF tutorial which describes why Haskell can be as fast as the others would be great to have.
The introduction to Haskell[0], linked from the front page, mentions performance; if you type the word "performance" in the search field, you will get the main performance page[1], which links to a lot of other pages about performance. The Computer Language Benchmarks Game[2] shows how well Haskell (GHC) performs, compared to other (implementations of) languages. Everybody is welcome to request a HaskellWiki account[3] and update pages[4].
Regards, Henk-Jan van Tuyl
[0] https://www.haskell.org/haskellwiki/Introduction
[1] http://www.haskell.org/haskellwiki/Performance
[2] http://benchmarksgame.alioth.debian.org/
[3] http://www.haskell.org/haskellwiki/index.php?title=Special:UserLogin&returnto=HaskellWiki%3ACommunity
[4] http://www.haskell.org/haskellwiki/HaskellWiki:Contributing

Закиров Марат wrote:
My hypothesis is that the compiler somehow reduces the creation of a new list to just adding or removing one element. If that is not so, then even ':', which just adds to the head of a list, would be an O(n) operation, because it would have to return a brand new list with one element added. Or maybe the functional approach uses a rather different complexity metric, where copying some structure (a list, for example) is just O(1)? If so, the question about the compiler still stands.
There is no magic in the compiler. It's a nice side-effect of having immutable data. If we copied the whole list, that would be O(n), but we don't need to. We just make references to the old data. In a language like C++ that would be dangerous, since modifying the old list would affect the new one. It's safe in Haskell since data is immutable, so nobody can affect our code by mutating the data we're referencing.

Finger trees are complicated, so I'll demonstrate it with a simple implementation of lists. These will work the same way as Haskell's built-in lists, except for the types:

l1 = (20, (10, ()))

This works just like "20 : 10 : []". Now, internally this is not represented as one big array in memory. Instead, each part is stored separately and joined together like this (i.e. it's a singly-linked list):

l1 = (20, l2)
l2 = (10, ())

Since l2 and "()" are immutable, there's no point copying their values around, so the runtime will just use pointers to a single copy. Prepending an element works in the same way, i.e. there's no need to copy l1:

l3 = (30, l1)

Appending to the end is a different story. We can't reference any existing value, since their pointers will be wrong. For example "(30, (20, (10, (40, ()))))" would become:

l4 = (30, l5)
l5 = (20, l6)
l6 = (10, l7)
l7 = (40, ())

The only new part here is l7, which is the "40" we've added, but we couldn't re-use l2 for the "10" since its pointer goes to "()" and we need it to go to l7. We're forced to make a new value l6 which has the pointer we need. Of course, the problem has just been shifted, since we now can't use l1 for our "20" because it's pointing to l2 and we need one which points to l6. That forces us to define the new value l5, and so on all the way along the list, which is why it takes O(n) time.
Data structures like finger trees use references to existing values in much the same way, but they manage these references in clever ways so that operations have small effects (like prepending to a list) rather than expensive chain reactions (like appending to a list). Cheers, Chris PS: DLists work very differently, since they're created out of functions!
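The sharing-versus-chain-reaction contrast above can be made concrete with a tiny cons list (the `List` type and function names below are made up for the example, standing in for the `(x, rest)` pairs):

```haskell
-- A minimal cons list, mirroring the (x, rest) pairs above.
data List = Nil | Cons Int List deriving (Show, Eq)

-- O(1): the new cell simply points at the old list, which is shared.
prepend :: Int -> List -> List
prepend = Cons

-- O(n): every cell on the way to the end must be rebuilt, because
-- each cell holds a pointer to the next and pointers are immutable.
append :: Int -> List -> List
append x Nil         = Cons x Nil
append x (Cons y ys) = Cons y (append x ys)
```

`prepend` allocates one cell no matter how long the list is; `append` has to reconstruct the whole spine, which is exactly the l4/l5/l6/l7 chain reaction described above.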

data SnocList a = SnocList ([a] -> [a])
Inserts to the front and end in O(1).
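A sketch of how such a difference list supports both O(1) inserts (the helper names `emptySnoc`, `cons`, `snoc`, and `toList` are made up for this example). Note that, as discussed elsewhere in the thread, this gives cheap construction at both ends, not cheap removal from the tail:

```haskell
-- A list represented as a function that prepends its elements.
newtype SnocList a = SnocList ([a] -> [a])

emptySnoc :: SnocList a
emptySnoc = SnocList id

-- O(1): prepend by composing on the outside.
cons :: a -> SnocList a -> SnocList a
cons x (SnocList f) = SnocList ((x :) . f)

-- O(1): append by composing on the inside.
snoc :: SnocList a -> a -> SnocList a
snoc (SnocList f) x = SnocList (f . (x :))

-- O(n), paid once when the ordinary list is finally needed.
toList :: SnocList a -> [a]
toList (SnocList f) = f []
```

For example, `toList (cons 1 (snoc (snoc emptySnoc 2) 3))` yields `[1,2,3]`: both the front insert and the two end inserts were single function compositions.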

Thank you all, guys, your answers were really helpful.
Also I noticed that most Haskell people are faaaar west of Moscow.
Lisp queues are really cool, thanks Richard (something like that was
cooking in my head).
I do think that imperative languages are horrible: they are unreadable
(they are just a "sugared" version of a Turing machine :) ).
They constrain SW parallelism with pointer aliasing.
They need to be executed on a CPU with a retirement buffer.
So my preferences are on the functional side...
The big question is all about: how to make code which has ~same
performance on a single-core CPU with algorithms which are not too
complex.
But let's speak about the problem.
I need to form an array of local mins.
Radius = 1
In [1, 2, 0, 3, 5, 7, 4] Out [1, 0, 0, 0, 3, 4, 4]
The algorithm itself is a bit complicated and spaghetti-like in my C version.
But verbally it sounds like this:
I dynamically maintain a sequence of indices of strictly increasing
elements.
To do that, I push a new element at the head just after I have popped all
the stuff which is bigger or equal.
Also I remove from the tail all elements whose range is bigger than
Radius. And then I just put the element into the out array.
Pretty simple, huh?
This is part of a bigger procedure (image filtration) which
operates on a matrix and calculates local mins and maxs by reusing
the algorithm described above in a pretty straightforward way, obtaining
O(n) complexity.
--Marat
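The sliding-window minimum Marat describes can be sketched with Data.Sequence as the monotonic deque. This is a speculative translation, not Marat's actual C code: the name `localMins`, the `Int` element type, and the window convention out[i] = minimum over indices [i-r .. i+r] are all assumptions taken from the example; the deque is oriented with new indices pushed on the right and stale ones popped on the left.

```haskell
import Data.Sequence (Seq, ViewL (..), ViewR (..), viewl, viewr, (|>))
import qualified Data.Sequence as Seq

-- out !! i = minimum of the input values at indices [i-r .. i+r].
-- The deque holds indices whose values are strictly increasing;
-- both ends are amortised O(1), so the whole pass is O(n)-ish.
localMins :: Int -> [Int] -> [Int]
localMins r xs = go Seq.empty 0
  where
    n     = length xs
    a     = Seq.fromList xs
    val i = Seq.index a i

    go dq i
      | i >= n + r = []
      | otherwise  =
          let dq1 = if i < n then push dq i else dq
              dq2 = popFront dq1 (i - 2 * r)  -- drop indices left of window
              out = if i >= r then [val (front dq2)] else []
          in out ++ go dq2 (i + 1)

    -- Pop from the back every value >= the new one, then push its index.
    push dq i = case viewr dq of
      rest :> j | val j >= val i -> push rest i
      _                          -> dq |> i

    -- Drop indices that have fallen out of the window.
    popFront dq lo = case viewl dq of
      j :< rest | j < lo -> popFront rest lo
      _                  -> dq

    front dq = case viewl dq of
      j :< _ -> j
      EmptyL -> error "localMins: empty window"
```

On the example from the mail, `localMins 1 [1,2,0,3,5,7,4]` produces `[1,0,0,0,3,4,4]`.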

Note: all of the options for playing with lists and queues and fingertrees
come with trade-offs.
Finger trees give you O(log n) appends and random access, O(1)
cons/uncons/snoc/unsnoc etc. but _cost you_ infinite lists.
Realtime queues give you the O(1) uncons/snoc. There are catenable
output-restricted deques that preserve those and can upgrade you to O(1)
append, but we've lost unsnoc and random access along the way.
Skew binary random access lists give you O(log n) drop and random access
and O(1) cons/uncons, but lose the infinite lists, etc.
Tarjan and Mihaescu's deque may get you back worst-case bounds on more of
them, but we still lose O(log n) random access and infinite lists.
Difference lists give you an O(1) append, but alternating between
inspection and construction can hit your asymptotics.
Lists are used by default because they cleanly extend to the infinite
cases; anything more clever necessarily loses some of that power.
-Edward

On Thu, Jul 17, 2014 at 01:35:58PM +0400, Закиров Марат wrote:
I am teaching myself Haskell. The first impression is very good, but the phrase "haskell is polynomially reducible" is making me sad :(. Anyway, I am trying to port my algorithm written in C. The key to performance is the ability to remove an element from the end of a list in O(1), but the original Haskell functions last and init are O(n). My questions are: 1) Is the last function something like a "black box" written in C++ which performs in O(1), so I shouldn't even try to imagine some Haskell O(1) equivalent? 2) Or will the optimizer (LLVM?) reduce init & last complexity to O(1)? 3) Some people suggest using the sequences package, but how do they implement an O(1) init & last equivalent in Haskell?
I'm rather confused about your question. If you want a Haskell data structure that supports O(1) head, tail, init and last, why not indeed use Data.Sequence, as has been suggested? As for how it's implemented, it uses the (very cool) fingertree data structure. See here for more details: http://hackage.haskell.org/package/containers-0.2.0.1/docs/Data-Sequence.htm... Tom

On Thu, Jul 17, 2014 at 01:35:58PM +0400, Закиров Марат wrote:
I am teaching myself Haskell. The first impression is very good, but the phrase "haskell is polynomially reducible" is making me sad :(.
Cheer up my friend. You know what makes _me_ sad? 250+ soldiers, fine Ukrainian men in their prime, dead; 1000+ wounded. In two months... http://www.pravda.com.ua/news/2014/07/15/7031993/ Anyway.
-- Семен Тригубенко http://trygub.com

Hi, Закиров Марат wrote:
Is the last function something like a "black box" written in C++ which performs in O(1)?
This is not really about Haskell vs. C++, but about how the data structures look in memory.

If you have a singly-linked list in memory:

root --> 1 --> 2 --> ... --> n

and you only have a pointer to 1, you cannot even access the n'th element without following n pointers. So last for a singly-linked list is necessarily O(n), independently of language.

Let's look at adding an element to the front of the list. We start with this:

old --> 1 --> 2 --> ... --> n

and we want to end up with this:

new --> 0 --> 1 --> 2 --> ... --> n

This looks bad at first, because it seems that we have to copy the original list before putting the 0 in front. The trick is to just use the old list in the new list without copying it:

old --> 1 --> 2 --> ... --> n
        ^
        |
new --> 0

In C++, this trick would be dangerous: if one part of the code used the old pointer to modify the list, this modification would be visible to some other part of the code using the new pointer. In Haskell, no code can modify the list once it is created. (Note that from the point of view of 1, nothing changed, because the 1 cannot see the 0.) So this problem doesn't arise and we can freely share most of the list between old and new. All we did was allocate 1 heap cell for the new list element, so this operation is O(1).

But what if we want to append to the end of the list? We start with this:

old --> 1 --> 2 --> ... --> n

And we want to end up with this:

new --> 1 --> 2 --> ... --> n --> n + 1

We cannot even reach the n without doing O(n) operations. Even worse, we cannot reuse any of the old list, because the difference between the old and the new list is visible from every heap cell. So we have to allocate n + 1 new heap cells.
Or will optimizer (llvm?) reduce init&last complexity to 1?
No.
Some people suggest to use sequences package, but still how do they implement O(1) init&last sequences equivalent in haskell?
They use a different data structure that's carefully designed to have just the right pointer where and when you need it. The API documentation of Data.Sequence mentions the complexity of each function: hackage.haskell.org/package/containers/docs/Data-Sequence.html If you want to learn how that works, you can follow the source links in the API documentation or read the research paper they link to. Tillmann

hi,
Is the last function something like a "black box" written in C++ which performs in O(1)?
This is not really about Haskell vs. C++, but about how the data structures look like in memory.
as tillmann said, it's about the data structures. Data.Sequence sounds like it fits your algorithm very well, but maybe one of the array libraries (vector, array, repa, accelerate) suits you better. there are also mutable variants that live in `ST` or `IO`, so you could translate your algorithm without any difference. good luck, tobias florek

Oddly enough, I don't think anyone has attempted to answer the original
questions, so I'll try to do so.
On Thu, Jul 17, 2014 at 5:35 PM, Закиров Марат
I am teaching myself Haskell. The first impression is very good, but the phrase "haskell is polynomially reducible" is making me sad :(. Anyway, I am trying to port my algorithm written in C. The key to performance is the ability to remove an element from the end of a list in O(1), but the original Haskell functions last and init are O(n). My questions are: 1) Is the last function something like a "black box" written in C++ which performs in O(1)?
No. Haskell lists are equivalent to linked lists in C. If you know how to get the last element of a singly-linked list in O(1), you should be able to implement the same algorithm in Haskell (hint: it's not possible, but if you're more familiar with C maybe this will help you see why).
So I shouldn't even try to imagine some Haskell O(1) equivalent. 2) Or will the optimizer (LLVM?) reduce init & last complexity to O(1)?
Generally no, for the same reasons.
3) Some people suggest using the sequences package, but how do they implement an O(1) init & last equivalent in Haskell?
By using a different data structure. The finger tree data structure is designed to have O(1) access to either end. Access to elements near the middle is slower, but the worst case is some complicated log factor, so it's still better than O(n). I expect your C algorithm uses arrays, not linked lists. You could do exactly the same in Haskell (using mutable arrays) and then you would have the same algorithmic complexity as your C code. The big downside is that mutable array operations are in IO or ST. John L.
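The mutable-array option can be sketched like this (illustrative only; `squareAll` is a made-up example, not Marat's filter): destructive, C-style updates happen inside ST, and `runSTUArray` seals the mutation so the result is still a pure function.

```haskell
import Control.Monad (forM_)
import Data.Array.ST (newListArray, readArray, runSTUArray, writeArray)
import Data.Array.Unboxed (elems)

-- Square every element in place, then freeze the array back to a list.
squareAll :: [Int] -> [Int]
squareAll xs = elems arr
  where
    n   = length xs
    arr = runSTUArray (do
      a <- newListArray (0, n - 1) xs       -- mutable unboxed array
      forM_ [0 .. n - 1] $ \i -> do
        v <- readArray a i
        writeArray a i (v * v)              -- destructive update, O(1)
      return a)
```

From the outside `squareAll` is referentially transparent; the mutation cannot leak because the `s` type parameter of ST prevents the array from escaping.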

On 17/07/2014, at 9:35 PM, Закиров Марат wrote:
I am teaching myself Haskell. The first impression is very good... Anyway, I am trying to port my algorithm written in C. The key to performance is the ability to remove an element from the end of a list in O(1).
You can't. Not in *any* programming language. That's because lists are one of many possible implementations of the "sequence" concept, and they are optimised to support some operations at the expense of others. At the beginning level, you should think of all Haskell data structures as immutable; fixed; frozen; forever unchanged. You can't even remove an element from the front of a Haskell list, at all. All you can do is to forget about the original list and concentrate on its tail.
But the original Haskell functions last and init are O(n).
Haskell lists are singly linked lists. Even by going to assembly code, you could not make these operations O(1) without *using a different data structure*.
My questions are: 1) Is the last function something like a "black box" written in C++ which performs in O(1)?
No.
2) Or will the optimizer (LLVM?) reduce init & last complexity to O(1)?
No.
3) Some people suggest using the sequences package, but how do they implement an O(1) init & last equivalent in Haskell?
Well, you could try reading Chris Okasaki's functional data structures book. There is a classic queue representation devised for Lisp last century which represents a queue by a pair such as ([a,b],[e,d,c]), so that you can push and pop at either end. When the end you are working on runs out, you reverse the other end, e.g., ([],[e,d,c]) -> ([c,d,e],[]). That can give you a queue with *amortised* constant time. (There is a technical issue which I'll avoid for now.)

But let's start at the beginning. You have an interesting problem, P. You have an algorithm for it, A, written in C. You want an algorithm for it, H, written in Haskell. Your idea is to make small local syntactic changes to A to turn it into H. That's probably going to fail, because C just loves to smash things, and Haskell hates to. Maybe you should be using quite a different approach, one that would be literally unthinkable in C. After all, being able to do things that are unthinkable in C is one of the reasons for learning Haskell. Why not tell us what problem P is?
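That classic two-list queue might look like this in Haskell (a sketch; the `Queue` type and function names are made up for the example): pushes go on the rear list, pops come off the front list, and the rear list is reversed only when the front list runs dry, which is where the amortised O(1) bound comes from.

```haskell
-- Rear list (newest first) and front list (oldest first).
data Queue a = Queue [a] [a] deriving Show

emptyQ :: Queue a
emptyQ = Queue [] []

-- O(1): push onto the rear list.
push :: a -> Queue a -> Queue a
push x (Queue rear front) = Queue (x : rear) front

-- Amortised O(1): reverse the rear list only when the front is empty.
pop :: Queue a -> Maybe (a, Queue a)
pop (Queue [] [])       = Nothing
pop (Queue rear [])     = pop (Queue [] (reverse rear))
pop (Queue rear (x:xs)) = Just (x, Queue rear xs)
```

Pushing 1 then 2 and popping yields 1 first, so the FIFO order is preserved even though each individual operation only touches one end of one list.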

On Thu, Jul 17, 2014 at 8:54 PM, Richard A. O'Keefe
But the original haskell functions last and init are O(n).
Haskell lists are singly linked lists. Even by going to assembly code, you could not make these operations O(1) without *using a different data structure*.
At this point, you may be wondering why Haskell uses singly linked lists so much, when one might think an array/vector type might be more generally useful.

The thing about functional languages, and in particular pure functional languages such as Haskell, is that you don't generally have things like looping constructs; instead, you represent your data in a form which implies such a construct. In particular, where you might use a for/foreach loop in other languages, in a functional language you use a singly linked list and a map or fold over the list. (Some object-oriented languages do this as well; consider the "each" method in Smalltalk or Ruby.)

In a pure functional language like Haskell, this can lead to optimizations you would not get from an explicit foreach loop: since values are immutable and can be generated lazily, a compiler can more easily recognize that it doesn't need to generate or maintain a list at all but just consume values as they are generated. Languages that use foreach loops often can only do this optimization in specific cases: for example, in Perl a foreach loop on a constant numeric range is optimized this way, but not others; in Ruby, it's much harder to do this at all because the "each" method needs to be resolved to a list object; and commercial Smalltalk implementations generally have quite a lot of behind-the-scenes intelligence to try to recognize and optimize these kinds of cases without falling into the same trap as e.g. Ruby.

-- brandon s allbery kf8nh sine nomine associates allbery.b@gmail.com ballbery@sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net
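As a small illustration of that "list as loop" pattern (a hedged sketch; `sumSquares` is a name made up for the example): this reads as "build a list of squares, then sum it", but with optimisations GHC can typically fuse the producer and consumer so that no intermediate list is materialised.

```haskell
-- Looks like it allocates a list of n squares; under fusion it
-- usually compiles down to a simple accumulating loop.
sumSquares :: Int -> Int
sumSquares n = sum (map (\x -> x * x) [1 .. n])
```

For instance, `sumSquares 3` is 1 + 4 + 9 = 14.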
participants (15)
- Brandon Allbery
- Chris Warburton
- Edward Kmett
- Frerich Raabe
- Henk-Jan van Tuyl
- Jake McArthur
- Janis Voigtländer
- John Lato
- Richard A. O'Keefe
- Semen Trygubenko / Семен Тригубенко
- Tillmann Rendel
- Tobias Florek
- Tom Ellis
- Tony Morris
- Закиров Марат