xmlgraphics-batik-users mailing list archives

From Justin Couch <jus...@vlc.com.au>
Subject Re: GVTBuilder error with custom DOMs
Date Mon, 18 Mar 2002 11:59:20 GMT
Chris Lilley wrote:
> Ivan Herman, I suspect.

Ta. I had Ivan in my mind, and kept thinking Ivan Sutherland and knew 
that wasn't right :)

> My point was the geometry - going from 2D to 3D is not a case of "add
> one more 

Ah. Ok.

> JC> So we sort of do the second option - we have an DOM and an internal 
> JC> structure. To us, CSS is just another external input to the system, 
> JC> another structure that the core rendering engine uses during the 
> JC> rendering cycle. It looks for changes in the CSS and applies that during 
> JC> the next cycle, just like any other change that would come from the DOM.
> 
> Okay, but changes in the DOM affect the results of the CSS.

True, but those changes don't have any effect until that next render 
cycle. Effectively all the changes are batched together, and the 
render engine applies the lot during the next frame.
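That batching model can be sketched roughly as follows (all names here are illustrative, not from Batik or Xj3D): changes are queued as they arrive, and the render engine applies the whole batch once at the start of the next frame.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of frame-batched change application (hypothetical names).
class FrameBatchedRenderer {
    private final List<Runnable> pendingChanges = new ArrayList<>();
    int framesRendered = 0;

    // Called from the DOM/CSS side at any time between frames.
    synchronized void postChange(Runnable change) {
        pendingChanges.add(change);
    }

    // Called once per render cycle: apply the whole batch, then draw.
    synchronized void renderFrame() {
        for (Runnable change : pendingChanges) {
            change.run();
        }
        pendingChanges.clear();
        framesRendered++;   // stand-in for the actual draw pass
    }
}
```

Between frames the render state stays untouched no matter how many changes arrive, which is the point being made above.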

> JC>  After parsing the CSS,
> JC> it is just another, separate, input to the rendering engine.
> 
> 
> I agree its another input, but its not very separate. At minimum there
> is a high degree of crosslinking of structures. You seem to have the
> view that CSS is an api that makes changes to the DOM. This is not
> correct, it never changes the DOM (more strictly: it never alters the
> infoset).

Ok, I probably didn't explain myself very well. I understand that 
CSS is not an API, but rather a decorator. An API can change the CSS 
(such as DOM-CSS), and those changes are then propagated into the 
rendering core. In this way it is not altering the DOM, but the 
renderer still needs to understand state changes in both - including the 
addition and removal of an entire CSS stylesheet.

On one point, though, I disagree: there is no cross-linking between the 
structures. A CSS stylesheet can exist independently of the XML 
document, just as an XML document can exist separately from a CSS 
stylesheet. One does not make changes to the other. The linking is only 
made at runtime, and not by the DOM or by the CSS in-memory 
representation - it is made by the rendering engine. If nothing renders, 
then the link between the two is only whatever the external viewer 
(an arbitrary piece of userland code) decides to make of it.
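A minimal sketch of that runtime-only linking, with hypothetical names: the element tree and the stylesheet are independent structures, and only the renderer joins them, at render time.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch (hypothetical names): DOM and CSS never refer to
// each other; the renderer is the only place the two meet.
class RuntimeLinkingDemo {
    static class Element {                 // stand-in for a DOM node
        final String selector;
        Element(String selector) { this.selector = selector; }
    }

    static class StyleSheet {              // stand-in for parsed CSS
        final Map<String, String> fillBySelector = new HashMap<>();
    }

    // Resolution happens here, at render time, and nowhere else.
    static String resolveFill(Element e, StyleSheet css) {
        return css.fillBySelector.getOrDefault(e.selector, "black");
    }
}
```

Either structure can be created, modified, or thrown away without the other ever noticing.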

> JC> The core
> JC> listens to events in the external APIs and makes the appropriate changes 
> JC> to the visual output. In this way, the DOM has its view, the CSS has its 
> JC> own, just as required in an earlier email. To boot, there's a hell of a 
> JC> lot less management code needed too.
> 
> I don't see how the last sentence follows from the preceding ones.

If you consider each input to the rendered output, then the DOM and CSS 
are two independent items. At the moment, the Batik rendering core 
assumes they are inseparable, which is a fundamental point of 
disagreement here. If you treat the rendering core as just that, and 
both the DOM and CSS data structures as separable items that make 
occasional (in the grand scheme of things) modifications to the core's 
rendering state, then the management code in the core becomes almost 
non-existent. The core can continue to operate and render without ever 
needing to refer to whatever generated it. If the DOM never changes, why 
should the render core need to refer to it every frame? That's an awful 
lot of extra overhead.

> JC> For SVG, SMIL is just another part of the rendering engine too. It is 
> JC> identical to our event model, in that it is responsible for 
> JC> synchronizing multiple different content streams over a period of time.
> 
> It does that, but it does more than that. It takes the output of the
> CSS engine and applies a series of modifications before the result
> gets rendered. It has a 'sandwich' (kind of a stack) of persistent
> changes, so that animation of properties can also take into account
> asynchronous changes of the DOM tree underneath it.

Sure, I'll have to defer on that point. I've used SMIL as an end user a 
bit, but I have never implemented a rendering engine driven by SMIL 
definitions. (FWIW, we're also getting comments from one of our 
customers asking us to work SMIL support into X3D too! ARRGGGH! Too many 
things!)

> JC> rendering engine need to know that the start command came from a SMIL 
> JC> document, from a mouse click on a menu item or a keyboard accelerator 
> JC> keystroke. Start, stop, that's all you need to care about.
> 
> If only it were that simple.

Sure. I was just hand-waving to illustrate the conceptual point: the 
render engine goes on its own merry way, maintaining its constant state 
until prodded from the outside. Consider it Newton's first law applied 
to software. SMIL just becomes another thing that can upset the constant 
state the renderer wishes to remain in.

> JC> If yes, we set up mappings from the 
> JC> DOM to our scene graph - we attach DOM EventListeners to the fragment we 
> JC> are working from and pass the information through to the render core 
> JC> when we see them. Right now, for the externally generated DOM, we don't 
> JC> pass events back out that change within the scenegraph internals.
> 
> Ah. See, SVG needs to do that all the time. A given, likely
> discontinuous piece of geometry needs to be linked back to the node(s)
> in the DOM so that events move correctly in the capture and bubble
> phases.

But do they? Apply the old cliche: if a tree falls in the forest and 
there is nobody around, does it make a sound? If you have no listeners 
on your DOM, do you need to propagate that event into the DOM from the 
renderer? Just like in physics, observing the action changes the state 
of the system. That's the way our DOM implementation works. If nobody is 
listening, we never pass the events through. That leaves us extra CPU 
headroom to do things like render faster (DOM event propagation is a 
really nasty thing in performance/efficiency terms and doesn't translate 
at all well to realtime rendering systems). So, from a conformance 
perspective, as soon as you attach a listener to the DOM tree to check 
that events are being propagated, you see them. As soon as you remove 
the listener, we stop sending them. We pass the conformance test and you 
get a nice high-performance, decoupled architecture.

So, you ask - how do you know whether someone is listening? This is 
where we come back to the architecture of dealing with DOMs from 
various sources. If the DOM comes from our custom source, then we can 
monitor exactly when and what someone is listening for. We wrote the 
code, so we added the hooks for it. That one's easy to deal with. What 
about the case where we are using a user-supplied DOM implementation? In 
this situation we assume we're part of a mixed-media document anyway, so 
someone is going to want to know about it, and therefore we propagate 
all events all the time. It has a performance impact, but that's not a 
problem. Our assumption is that if we're given a DOM to work with, there 
is a very high probability that the caller will be managing the 
rendering updates too - effectively our renderer is embedded in a bigger 
page space, and that page space will be managing the repainting 
schedule. Therefore, any performance degradation due to DOM event 
cascades/bubbles is not going to be as much of an issue as before. The 
performance bottleneck will not be our renderer, but the application's 
page-space update manager.
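The listener-gated dispatch described above can be sketched like this (hypothetical names; our actual implementation differs): the event source simply skips the whole cascade when nothing is registered.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of listener-gated event dispatch (illustrative names): the
// renderer only pushes events out when at least one listener exists.
class GatedEventSource {
    private final List<Consumer<String>> listeners = new ArrayList<>();
    int eventsDispatched = 0;

    void addListener(Consumer<String> l) { listeners.add(l); }
    void removeListener(Consumer<String> l) { listeners.remove(l); }

    // Called from the render core when it detects an interaction.
    void fire(String event) {
        if (listeners.isEmpty()) {
            return;            // nobody observing: skip the whole cascade
        }
        for (Consumer<String> l : listeners) {
            l.accept(event);
        }
        eventsDispatched++;
    }
}
```

Attach a listener and events flow; remove it and the cost disappears again, which is exactly the conformance-test behaviour described above.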

> OK. In SVG, there are a lot more events - for example all the
> text-related nodes can have insert and select events; any node can
> have mouse hover events, etc.

How do those events get generated in the first place? What piece of code 
determines that "I should propagate a select event into the tree now"?

> JC> explicit model called ROUTEs. So, for us, having a mouse event come down 
> JC> through the DOM would be a very rare event, as it is mostly driven from 
> JC> internally detected sources.
> 
> Yes, that's a significant difference. It makes it very reasonable for
> you to optimize the way you have, because only a few known parts of
> your rendering are event sensitive.

Actually, you'd be surprised how much of the entire scene graph is event 
sensitive. Many different sensor types exist - such as visibility, 
proximity and collision - that are not activated by user input but still 
generate events that get sent through the entire scene graph. A fairly 
normal approach is to place a touch sensor on the root node of an entire 
object hierarchy - say an H-Anim character. In addition, there are 
capture-like semantics in X3D too, where you can nest sensors. You 
have to traverse the entire render tree to work out who should receive 
the event. So while we are slightly different in terms of the types of 
events, the amount of information that could traverse up and down the 
tree is close.

> JC> In comparison to the SVG world, there shouldn't be any difference 
> JC> really. Most of the lowest-level stuff is still in hardware, or at least 
> JC>   down in the operating system-specific APIs (at least from JDK 1.3 
> JC> onwards, 1.2 was doing pure-java software rasterisers).
> 
> I must admit that I did not notice any difference in speed of 2D
> operations between 1.2 and 1.3. I tried 1.4 and it was slightly
> faster, but it was a beta and I rolled back to 1.3. I should get the
> release version of 1.4

Err... the next set of comments is Sun JDK implementation-specific. If 
you weren't using the Sun code, then this is probably inconsequential, 
because other JVM providers may have implemented the code differently.

1.4 feels a bit faster, some of the time. 1.3 was significantly faster 
than 1.2 if you were doing a lot of image manipulation operations. At 
the time, Sun wrote the 1.2 APIs in pure Java - right down to their own 
implementation of Bresenham line rasterisation. This was a huge source 
of complaints, because 1.1 used native APIs and the change brought a 
massive performance hit. Many developers stayed away from 1.2 for a 
long time because of this.

With the move to 1.3 Sun moved their implementation back to using the 
native APIs of the underlying OS. For simple actions like lines and 
circles, there wasn't that much difference, but once you started doing 
heavy work like image convolution operations, there were very 
significant performance increases.

Going to 1.4, I don't believe there are many major performance 
optimisations in the rendering/image manipulation stuff. The major 
performance improvement will come from using the new APIs and new 
image formats. In particular, making use of DataBuffers and 
VolatileImage to ensure that a lot of the image loading and 
manipulation code never makes it into Java code will have 
significant performance impacts. There's a nice JMF/J3D demo that 
illustrates this floating around in the bowels of the Swing Connection.

> JC> If you started
> 
> (I assume you mean the Batik developers by "you")

Yes. Make that a general assumption about all of my mutterings.

> JC> using VolatileImage as the output source, even most of the high-level 
> JC> operations like image transformation and clipping would be done in 
> JC> hardware on the video card.
> 
> Do modern video cards offer bicubic interpolation of images? I was not
> aware of that. Do you have a pointer where I could find out more?

I remember at least a bunch of the 3DLabs cards do. I've got some 
pointers in one of their presentations, but we're under NDA for that. 
The basic hint: it's mainly stuff about OpenGL 2.0. As for other video 
cards, I haven't been paying much attention over the last six months - 
I've been too busy just keeping up with the 3D side of the house.

Hmmmm... just thinking about it, even if you didn't have it as a native 
operation on the hardware, implementing it as a pixel shader would be 
pretty trivial. You'd get close to hardware acceleration, but it 
obviously requires per-machine, per-video-card setup routines that are 
well and truly outside the "pure Java" world.

> There are at least two efforts that I am aware of to try and
> re-use OpenGL hardware acceleration to do SVG rendering speedups.

Oh, that would be nice! Either of them Java based?

> JC> Why do you require sync control?
> 
> I may have expressed it badly. It will need to be notified of all
> changes to that subtree, in addition to being able to make changes to
> that subtree and handling event propagation. An optimization is to
> 'seal off' the root of the subtree and handle even propagation
> itself, only sending events to the main tree that move up past the
> root of the subtree.
> 
> JC> Why can't you just be a good servent of
> JC> the containing application and only update your output when it perceives 
> JC> it is a good time?
> 
> Not sure what you meant by that.

When the SVG renderer is working as a sub-component of a larger page 
space, it needs to react to the page space's repaint requests. The 
containing page space becomes the arbiter of when the SVG renderer 
should run; it will have its own buffering schemes. That's part of the 
problem we face in trying to use Batik within an X3D/Java3D 
environment - the J3D renderer has its own time schedule. We can only 
push a new texture to the screen/video card during the time when J3D 
tells us it is OK to do so. When we get this request, the renderer runs 
around looking for what has changed - it would then call the SVG 
renderer and tell it "render now, in these clip bounds". We honestly 
don't care what happens to the SVG content in between, because it will 
never be visible to the user. If a mouse event happens over the top of 
the SVG content, we'll pass it in by whatever means is required 
(probably by posting an event to the doc fragment that represents that 
texture). However, we do care if we happen to catch the SVG renderer 
mid-frame. That's why we (like any other container application) require 
complete synchronisation and clock management over the renderer. We'll 
tell you when is a good time to render, not the other way around.

If you take the reverse situation - an X3D document fragment embedded 
within an SVG parent document - Xj3D will already handle that. The 
toolkit provides APIs and interfaces that let you be the clock, the 
render controller and the synchronisation point. If you tell us to 
paint, we'll paint; we won't paint when you don't want us to.
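A rough sketch of that "container is the clock" contract, with illustrative names rather than the real Batik or Xj3D APIs: the page space chooses both the moment and the damaged region, and the embedded renderer only ever paints when told to.

```java
// Sketch of container-driven rendering (hypothetical names).
class ContainerDrivenRendering {
    interface EmbeddedRenderer {
        // The container decides when this runs and with what clip bounds.
        void render(int clipX, int clipY, int clipW, int clipH, long frameTime);
    }

    static class PageSpace {
        private final EmbeddedRenderer embedded;
        long clock = 0;
        PageSpace(EmbeddedRenderer embedded) { this.embedded = embedded; }

        // One repaint pass: the page space, not the embedded renderer,
        // chooses the moment and the damaged region.
        void repaint(int x, int y, int w, int h) {
            clock++;
            embedded.render(x, y, w, h, clock);
        }
    }
}
```

The embedded renderer holds no timer of its own; every frame it produces is driven by the container's clock tick.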

> I understand why you perceive that, but I am not sure its true.
> Certainly there are a bunch of applications that use Batik, and some
> of them are multi-namespace. There are other multi-namespace tools
> such as XSmiles that use other SVG renderers, too, so I am not sure
> that there is anything inherent in the SVG specification itself that
> precludes easy integration of other namespaces.

I agree with you on the second part, and only partially on the first. 
Sure, it is possible to make Batik work in a mixed-content system, but 
it is *hugely* inefficient in doing so. Just the overhead of 
maintaining multiple DOM trees is a killer. On top of that, the way 
Batik goes off and creates its own threads and caching mechanisms is 
positively antisocial in bigger application environments. For us, in 3D 
graphics land, that's the absolute, ultimate turn-off. 3D graphics is 
very rarely multi-threaded - typically limited to two threads. The 
reason is the need to precisely control system resources and the 
synchronisation of state. There are other major turn-offs too, like the 
way image handling is done - either at the transcoder level or 
internally in the rendering process. There's some hideously inefficient 
stuff going on in there. Garbage collection is a realtime renderer's 
worst nightmare.

> On the other hand there is a bunch of stuff that is affected by the
> specs that SVG uses; so to add fooML support to an SVG implementation,
> the implementor needs to look at any new CSS properties it uses, and
> whether they are animatable, and what events are supported and whether
> those events are cancelable, and so forth.

Do you really need to? For example, if I embed an X3D document within 
SVG, why should you care about what CSS properties I use? If they 
affect me, I'll update my render core, and then I'll wait for you to 
tell me to render. Really, a good component-based API should provide the 
end user with a means of registering a namespace and the renderer for 
that namespace. When content from that namespace is detected in the DOM 
tree, create an instance of the nominated renderer, and then act as a 
supervisor. Tell me when I need to update, and then you take care of the 
image composition that needs to be done. I (being some component 
registered with the Batik rendering engine) have to conform to your 
component API, and you specify what my interface will be (typically a 
method that takes the rendering surface size, the rendering surface, 
the clip bounds and the current time/clock tick as parameters, and 
maybe returns a rendered image as output). Since you're the one in 
control, you then composite my output image into the overall output and 
everyone's happy.
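Such a component registration API might look roughly like this - a sketch with hypothetical names and a made-up namespace URI, not an actual Batik interface:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of namespace-to-renderer registration (hypothetical API).
class NamespaceRegistry {
    interface ComponentRenderer {
        // Simplified stand-in for "render into these bounds at this tick"
        // - a String result substitutes for the rendered image here.
        String render(int width, int height, long clockTick);
    }

    private final Map<String, ComponentRenderer> renderers = new HashMap<>();

    void register(String namespaceUri, ComponentRenderer r) {
        renderers.put(namespaceUri, r);
    }

    // Supervisor side: delegate rendering of foreign content; the caller
    // then composites the returned image into the overall output.
    String renderForeignContent(String namespaceUri, int w, int h, long tick) {
        ComponentRenderer r = renderers.get(namespaceUri);
        if (r == null) {
            throw new IllegalArgumentException("no renderer for " + namespaceUri);
        }
        return r.render(w, h, tick);
    }
}
```

The supervisor never needs to understand the foreign content; it only needs the registration table and the rendered output.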


> I meant, there is no particular reason to conclude that Batik is
> "useless" in multi-namespace integration. I wasn't quibbling about the
> value of multi-namespace documents, quite the reverse.

I'll disagree with you on the first point. That's how we got into this 
conversation. I believe, for a particular class of applications, Batik 
is useless. I'm trying to get that fixed :)

> Yes, you can of course modify SVG content through DOM XML APIs and
> people do that all the time. Which is true as far as it goes, but that
> is not very far. In other words, your response to 'there is a whole
> SVG DOM layered on top of the XML DOM and CSS OM' seems to be 'well I
> don't need to use that personally' which is all very nice but doesn't
> help the developers a great deal.

Why doesn't it help them? It is a loud message that there is at least 
one class of applications, and of XML-based specifications, that really 
doesn't care about, and probably doesn't actually want, that DOM hanging 
around. If you can build us a toolkit that can throw it away, we'll be 
very happy and will use your project in ours. blah, blah, blah.

> However, the relevance of B.6.2 to Batik is zero; the appropriate
> section is B.6.3

Again, I ask why? I would like to supply you with my user agent that 
does not have CSS support. Why are you not interested in supporting me?

> JC>  Nowhere does it say "the only way to interact with SVG
> JC> content is through the SVG DOM API".
> 
> Nowhere did I say that it did. You seem to be confusing exclusivity
> and existence. The fact that SVG content can be modified through the
> XML DOM does not mean that the SVG DOM is either optional or
> irrelevant. You are putting up a strawman argument and I am not sure
> why.

Que? The whole point of this discussion is the Batik developers saying 
"the core rendering engine for any SVG renderer cannot exist without the 
existence of, and very tight coupling to, the SVG DOM - and one 
particular implementation of it at that". I'm pointing out the fallacies 
in that argument. That is hardly a strawman.

> Notice the third bullet point under 'Specific criteria that apply to
> only Conforming Dynamic SVG Viewers': "The viewer must have complete
> support for an ECMAScript binding of the SVG Document Object Model."

Yes, and that has nothing to do with the rendering engine. What defines 
a viewer? A viewer must support it - that in no way implies the viewer 
must supply it. Two very different concepts. If I pass you a prebuilt 
DOM that does not contain the SVG-DOM feature, you can still be a 
conformant viewer. You have to support it, not implement it.

Ah, you say, but what about the "ECMAScript binding" bit? If I don't 
implement that, then how can we be conformant? Well, again Xj3D can be a 
case study for you. Our scripting engines - again a feature required by 
the specification - do not work inside the DOM. They are another adjunct 
to the rendering engine. When I fire up the Context for Rhino, I pass it 
a collection of prebuilt objects. At the root is one of those nodes 
that the ECMAScript code is looking for. As an implementation of the 
Context, when I get an ECMAScript [[Get]] action, I run off to the 
rendering engine and attempt to find what you are talking about. Then a 
bunch of wrapper objects are created as you start walking the object 
hierarchy. Each [[Get]] request (which manifests itself as the 
get(int, Scriptable) method of the Scriptable interface) just goes back 
to the rendering engine for the right information. Any writes to those 
objects get propagated into the rendering engine core, and then the 
script exits. At no stage does the script ever use or touch the 
"external" DOM. Even though the script is defined inside that DOM, it 
operates in its own separate context that lives off the side of the 
render engine. In this way, you can have an externally provided DOM that 
does not implement the SVG feature set and yet still have scripting 
that is conformant.
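The delegation pattern can be sketched in plain Java (modeled loosely on Rhino's Scriptable interface, but with hypothetical names and no Rhino dependency): every property read goes back to the engine on demand, and writes go straight through.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of script-to-engine delegation (illustrative names): the
// wrapper objects hold no state of their own, so the script never
// touches the external DOM at all.
class ScriptBridge {
    // Stand-in for the rendering engine's live state.
    static class RenderEngine {
        final Map<String, Object> state = new HashMap<>();
    }

    // Stand-in for a wrapper object handed to the script context.
    static class NodeWrapper {
        private final RenderEngine engine;
        private final String prefix;
        NodeWrapper(RenderEngine engine, String prefix) {
            this.engine = engine;
            this.prefix = prefix;
        }
        // Analogue of a Scriptable get: fetch from the engine on demand.
        Object get(String name) {
            return engine.state.get(prefix + "." + name);
        }
        // Analogue of a Scriptable put: write through to the engine.
        void put(String name, Object value) {
            engine.state.put(prefix + "." + name, value);
        }
    }
}
```

Because the wrappers are stateless proxies, the script always sees the engine's current state, and its writes land directly in the core.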

> I'm tempted to draw your attention to the words "conforming", "must"
> and "complete" but I assume you already know what words in the
> English language mean, as do I.

Yup, and you don't seem to know how to read specs and weasel out of them 
:) In all honesty, I've written one ISO spec, reviewed a couple of 
others (MPEG-4 being the most well known) and contributed huge numbers 
of reviews and modifications to the main VRML spec. I understand 
implicitly what the words of a specification say and mean. I also 
understand what they don't say. Building useful toolkits is as much 
about reading what was not said in the specification as reading the 
things that are said. I believe the Batik developers are concentrating 
too much on the latter and not enough on the former. If I could get you 
to concentrate more on the former and make those architectural changes, 
then you would still be completely conformant and be much more useful to 
a wider audience.

> Its not clear why the code would be implemented twice, other than to
> prove a point, as one moves to a Dynamic viewer with CSS, SMIL
> animation, say perhaps some animation of the width of the rectangle
> and a :hover style on the stroke width, the amount of the renderer
> that has to be duplicated increases dramatically, at which point you
> might as well call it a renderer.

Not at all. Think of a null renderer as a piece of code that provides 
the basic data structures for all other renderers. There's no need to do 
all the field management work twice, so just combine it into a common 
set of base classes. Or, as we're starting to do with the Xj3D project, 
the Java3D renderer extends the basic null renderer to include output 
capabilities. There's no need to duplicate the basics; just add the 
bits that are specific to your particular output device. We do this in 
Xj3D. Our null renderer still runs just like a real application - 
including realtime modifications and scripting. Because we've decoupled 
the various parts of the architecture, the scripting engine doesn't care 
how the objects got loaded, just that they did. It doesn't care what the 
renderer is either. It just works. Looking through the SVG spec, there's 
not that much that you need to do. Bounding boxes are trivial to 
calculate without needing a renderer. Images have their information 
supplied, as do various shapes; all you really need to do is work out 
the total glyph bounds for the text items, and even that can be done 
trivially without a renderer.
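The layering described above might be sketched like this (illustrative names only): the null renderer carries the shared data structures and field management, and a device renderer adds nothing but the output step.

```java
// Sketch of null-renderer layering (hypothetical names).
class NullRendererDemo {
    // Base "null" renderer: full data structures, no output device.
    static class NullRenderer {
        protected float rectWidth = 1.0f;
        void setRectWidth(float w) { rectWidth = w; }  // shared field management
        float boundingWidth() { return rectWidth; }    // bounds need no device
        void render() { /* no output at all */ }
    }

    // Device renderer: reuses everything, overrides only the output step.
    static class LoggingRenderer extends NullRenderer {
        final StringBuilder output = new StringBuilder();
        @Override
        void render() {
            output.append("rect width=").append(rectWidth).append('\n');
        }
    }
}
```

The null renderer still runs the whole pipeline - field updates, bounds queries - it simply emits nothing, so scripting and modification code work unchanged against either class.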

> JC> No. They are as intimate as the source material allows. See the section 
> JC> above. If intimacy is permitted, make use of it. If given the cold 
> JC> shoulder, then live with it and get on with the rendering.
> 
> Its not clear what that glib statement really means, in the context of
> Conforming Dynamic SVG Viewer.

See the point above. A conforming implementation can be provided with 
any user-given DOM, regardless of whether that DOM supports the SVG 
features or not.

>>>That seem to me to be saying that in the case of inlined SVG
>>>fragments, the implementation would be non conformant. I guess that is
>>>not very interesting to the Batik developers.
>>
> 
> JC> Any application, at any one time will not be conformant.
> 
> Hmm, it becomes increasingly hard to have a logical conversation about
> the interoperability of X3D and SVG at the specification level (my
> primary interest) and at the specific implementation level (also of
> interest) if you reserve the right to ignore any parts of the spec
> that you don't like when you start to lose parts of an argument.

Eh? I'm not ignoring it at all. I'm pointing out that there are various 
ways of interpreting the written word. There are also times when some 
users may not care about conformance, or only want a subset of it. Say 
Batik says "I'm implementing a Dynamic SVG Viewer"; I want to come along 
and say "no, I don't want you to be a dynamic viewer, static only is 
fine please".


> JC> Inlined
> JC> content is no different. It is just part of a bigger structure. You can 
> JC> still be entirely conformant, but also working in unison with other 
> JC> players on the same page.
> 
> Conformance levels and multi-namespace integration are entirely orthogonal.

I disagree, for the reasons stated earlier: the Batik developers are 
saying that I cannot have mixed-content integration because then they 
can't be conformant. Which one is it to be?


> JC> But that's what I explicitly trying to avoid! I don't want to clone the 
> JC> SVG tree. That's yet another piece of my memory you are consuming for 
> JC> your own greediness. I want you to use my DOM, and my DOM only.
> 
> Using your DOM is reasonable, using your DOM only when it does not
> have the required functionality is a different matter entirely.

Not really. See the points above. It doesn't have to have the required 
functionality; it can still stay conformant.

> JC> How rude of you to expect that we all feel like tossing all this
> JC> extra memory in your directions when we already have a perfectly
> JC> good copy floating around in memory already.
> 
> I am compelled to point out that someone who initiates a conversation
> by remarking that an entire implementation is a PoS has abrogated
> their right to accuse anyone else of being rude.

Well, I must have missed a smiley there somewhere then. I shay, that 
wash a joke shon, a joke! In addition, calling something crap is not 
being rude; it is an opinion. If you are offended by an opinion, so be 
it - thicker skins are needed. I'm not going to retract it, and as we go 
further through this discussion it only confirms my original opinion. If 
someone took a look at my code and called it crap, I would love that! It 
means there is a user who is prepared to look at it, weigh it up and 
form an opinion. If I can get some reasoning out of that, it will only 
make my code better. This is a public invitation for all of you to take 
a look at any and all of my public code, which has to be well over half 
a million lines by now, and give me a technical review of what I've 
done. Tell me honestly what you think of it. There's at least one piece 
of code that is central to this discussion now. Go read, comment, try to 
understand, offer me an opinion. Good or bad, I don't care. If you only 
ever want to hear good news, I might as well leave now, because I ain't 
gonna offer it with the code/project in the state it is. At least I'm 
not like most people - I'm here discussing things and trying to get 
improvements made, rather than just dissing it loudly and often in a 
public forum without being prepared to do something about it.

> So if you have an XML document and 5% of it is little SVG sprinkles
> for texture etc then deep cloning is an efficient and timely way of
> handling things.
> 
> If you have an XML document and 80% of it is SVG then deep cloning is
> wasteful of memory. 

Knowing a priori is a wonderful thing. We never know that, so we build 
the toolkit appropriately. All I can say today is that deep cloning, 
even in the 5% case, is extremely wasteful of both memory and other 
resources - just as much as in the 80% case.

> JC> settles down, it shouldn't been too hard to then catch up. Effectively, 
> JC> anything XML has been put on the backburner, hence we haven't gotten 
> JC> around to work with sending internal rendering events back out to the 
> JC> DOM when the DOM is user supplied.
> 
> It will be interesting to hear about your experiences with the topics
> discussed in this thread once you are further on the road towards
> implementing them.

The only thing not implemented on our XML side currently is the 
talkback from the rendering core to a user-supplied DOM. It's actually 
quite trivial to code up, because all the structures are in place. All 
the rest of our DOM implementation is complete and works very well with 
the core and with mixed access modes and content models (for example, we 
can have someone parse a VRML-classic-encoding file and present it as a 
DOM to an outside user). What we haven't done is update the DOM-extended 
API (called SAI, which is equivalent to the SVG DOM) to the latest 
versions of the specification. The "old version" works fine with the 
core and all that; it's just that if someone tried to compile against 
the new libs, they'd get errors due to slightly different structures. 
The main issue is just which tags are in the content model. Hence, I 
didn't think it was worth my brain context-swapping to build the 
integration code when there were other, more outstanding issues to be 
dealt with.

As for the mixed-content handling, well, we're at an impasse currently. 
We wanted to work on that, but the SVG toolkits we can find won't work 
in such a mode in a way that would render anything faster than a beetle 
crawling across the page. We'd like to fix that if we can. However, we 
might have to go find another spec to play with where the toolkits are a 
bit better organised, or drop the thoughts entirely.

-- 
Justin Couch                         http://www.vlc.com.au/~justin/
Java Architect & Bit Twiddler              http://www.yumetech.com/
Author, Java 3D FAQ Maintainer                  http://www.j3d.org/
-------------------------------------------------------------------
"Humanism is dead. Animals think, feel; so do machines now.
Neither man nor woman is the measure of all things. Every organism
processes data according to its domain, its environment; you, with
all your brains, would be useless in a mouse's universe..."
                                               - Greg Bear, Slant
-------------------------------------------------------------------


