Thread: [aKregator-devel] WebAkregator
From: Eckhart <ew...@ew...> - 2005-04-11 15:59:21
Hi,

"it would be even possible to code an Akregator web frontend to access your feeds from any machine (volunteers are welcome ;-) )." (Akregator blog)

I really liked the idea of a web frontend, so I decided to think about it (and asked Frank to help me ;-) ). Now there are several problems that came to our minds:

- What should be stored on the client side, what on the server side?
- When should we synchronize?
- Who fetches the feeds, server or clients?
- Which technology should be used? (PHP5, Perl, ...)
- Which protocol? (XML-RPC, ...)
- How do we minimize traffic?
- ...

Anyone else interested in a web frontend?

BTW: There is a wiki page where some ideas are collected:
http://akregator.sourceforge.net/wiki/wakka.php?wakka=WebAkregator

Eckhart
From: Frank O. <fra...@gm...> - 2005-04-11 16:03:10
Hi everyone,

as the new archive backend gets into shape (http://akregator.sf.net/blog), we can start thinking about the details of WebAkregator. Depending on the actual implementation, our archive interface may need major changes, but it should be possible.

As Sébastien is interested in the project, and Eckhart is interested in coding the web frontend, I hope we get some useful discussion here. As I have no experience in developing web applications, I leave the first comments to you ;-)

For now I just dump the issues that came to my mind:

* What should be stored on the client side, what on the server side?
* When should we synchronize?
* Who should fetch feeds, server or clients?
* Which technology should be used?
* What should the protocol look like?
* How do we minimize traffic?

Happy discussion!

Regards,
Frank
From: Eckhart <ew...@ew...> - 2005-04-12 21:05:42
Hi,

well, Frank and I posted at nearly the same time, so please forget my post and continue the discussion here. ;-)

On Monday, 11 April 2005 18:03, Frank Osterfeld wrote:
> For now I just dump the issues that came to my mind:
>
> * Who should fetch feeds, server or clients?

My opinion is that the server should fetch the feeds. This makes it very unlikely that you miss any article (I personally know a feed where staying away from the computer for one day means you miss something).

The next point is that the feed archive should be usable by multiple clients. If the clients fetch the articles themselves, they may fetch them simultaneously when they are running Akregator at the same time. This would make feed providers unhappy with Akregator.

Thirdly, there is a problem with archive distribution. If the feeds are fetched by one client, they have to be uploaded to the server anyway to make sure they are available there for the other clients. P2P sharing between two clients seems impractical.

> * How do we minimize traffic?

The client always caches the articles. It asks the server whether new articles are available, and the server responds with what has changed since the last call. The client then asks the server to send only the missing articles, and deletes the expired articles from its cache.

Additionally: security. Yes, I take this seriously ;-)

Client side: the client must not react badly to whatever the server sends to it.
Server side: the server must not react badly to what a client sends to it, or to what the feeds it fetches contain. Be aware of CSS (cross-site scripting) inside the web frontend!

Eckhart
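A rough client-side sketch of the cache-and-sync scheme described above, again in Python and again assuming the hypothetical articles_since/get_articles methods from the earlier sketch; the "expired" flag is an assumption about how the server marks old articles:

    # Sketch of the client-side sync loop: keep a local cache, ask the server
    # what changed since the last call, fetch only the missing articles, and
    # drop expired ones. The XML-RPC methods are the hypothetical ones above.
    import time
    import xmlrpc.client

    server = xmlrpc.client.ServerProxy("http://localhost:8080", allow_none=True)

    cache = {}       # article id -> article dict kept on the client
    last_sync = 0.0  # UNIX timestamp of the previous synchronization

    def synchronize(feed_url):
        global last_sync
        # 1. Ask the server which articles are new since the last call.
        new_ids = server.articles_since(feed_url, last_sync)
        missing = [article_id for article_id in new_ids if article_id not in cache]
        # 2. Fetch only the articles the client does not have yet.
        for article in server.get_articles(feed_url, missing):
            cache[article["id"]] = article
        # 3. Delete expired articles from the local cache.
        for article_id in [i for i, a in cache.items() if a.get("expired")]:
            del cache[article_id]
        last_sync = time.time()

With this split, only article IDs and the articles the client is actually missing cross the wire, which keeps the traffic close to the minimum discussed above.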