buildr-dev mailing list archives

From "Assaf Arkin" <ar...@intalio.com>
Subject Re: request for enhancement: compile, package and artifacts support for C++
Date Wed, 30 Jul 2008 18:08:11 GMT
On Wed, Jul 30, 2008 at 12:18 AM, Ittay Dror <ittay.dror@gmail.com> wrote:
> Thank you for your reply and patience.
>
> I now understand what you meant, and you are quite right, it can be done
> this way.
>
> However, my aim was to create the task prerequisites tree before rake
> invokes the first task.
>
> First, it will make '-P' show the tree (according to your suggestion, -P
> won't show that 'compile' depends on 'libsomething.so' and 'libsomething.a',
> right?). Secondly, having a complete tree of all tasks and prerequisites
> allows analyzing it.

It can build the tree during the definition or in after_define, as
long as it's only pointing to things it knows exist.  You probably want
to delay most of that work into after_define, so the definition can
add/change stuff incrementally.  For example, you can never know during
the definition when javah will be used to add another include
directory, but since javah is a definition method, by the time you get
to after_define it will already have been called if it's used on this
project.
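
For example, a rough sketch using Buildr's extension hooks (the module
name and the empty hook bodies are illustrative):

  module CppExtension
    include Buildr::Extension

    # runs before the project definition block executes; definition
    # methods can start accumulating state from here on
    before_define do |project|
    end

    # runs after the definition completes, so anything the definition
    # added incrementally (say, an extra include directory from javah)
    # is already in place -- safe to build the prerequisite tree here
    after_define do |project|
    end
  end

  class Buildr::Project
    include CppExtension
  end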

Assaf

>
> Both these reasons are non-functional of course.
>
> Ittay
>
> Assaf Arkin wrote:
>>
>> On Tue, Jul 29, 2008 at 12:59 PM, Ittay Dror <ittay.dror@gmail.com> wrote:
>>
>>>
>>> can you give an example of how a task can orchestrate other tasks? also,
>>> as far as i could tell, the 'compile' method always creates a
>>> CompileTask. i can't use it as is because it expects some compiler,
>>> which i can't give it because i want to use tasks. also, i can't add
>>> dependencies to it because it depends directly on tasks like
>>> 'resources', which the prerequisites should depend on.
>>>
>>
>> If you look at the end of compile.rb you'll notice one of the things
>> it does is call project.recursive_task('compile'), which causes one
>> project's compile task to execute all its child projects' compile
>> tasks.  Likewise, if you look at test.rb at the very end, you'll
>> notice that it's tacking the test task onto the very end of the build
>> task (always test after build).
>>
>> Another example is the XMLBeans task (in addon) which needs to
>> generate source code, that is added as prerequisite to compile, and
>> also copy files over to the target directory, which is done by the
>> compile task at the very end.
>>
>> From the compiler you can do whatever you need to, including invoking
>> as many tasks as necessary (let Rake worry whether to execute them or
>> not).  And like XMLBeans does, you can add additional prerequisites
>> when necessary, and make additional work happen after compilation.
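>>
>> The XMLBeans pattern boils down to something like this (a sketch; the
>> generated-source and resource paths are made up):
>>
>>   compile.enhance ['target/generated/xmlbeans']  # generate source first
>>   compile do |task|
>>     # after compilation, copy files over to the target directory
>>     cp_r 'src/main/xmlbeans-resources', compile.target.to_s
>>   end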
>>
>>
>>>
>>> At the risk of spending a lot of time on the obvious (i have a feeling
>>> we're
>>> talking about different things):
>>>
>>> say a project has 2 cpp files A.cpp and B.cpp, with matching headers, and
>>> no
>>> other headers, which compile to shared and static libraries. my
>>> dependency
>>> tree is:
>>>
>>> compile:cpp --+-- libsomething.so --+-- A.o --+-- A.cpp
>>>               |                     |         \-- A.h
>>>               \-- libsomething.a ---+-- B.o --+-- B.cpp
>>>                                               \-- B.h
>>>
>>> (both libsomething.so and libsomething.a depend on both A.o and B.o)
>>>
>>>
>>> these should be rake tasks for several reasons: timestamp checking, and
>>> the fact that two artifacts rely on the same set of objects. also,
>>> linking and compiling are two different commands. finally, if i call
>>> the compiler twice, it will do the work twice (that is, it doesn't have
>>> any internal mechanism that tells it there's no need to recreate the
>>> obj files or libraries).
>>>
>>
>> Yes.  If all these are separate tasks wired together, then Rake will
>> only compile what is necessary.  So let's say you have two tasks, just
>> to simplify (they have other prerequisite tasks), one for
>> libsomething.so and one for libsomething.a.  You have a compile task
>> that invokes these two tasks.  Rake only executes what is necessary by
>> checking dependencies on the object files, which in turn check
>> dependencies on the cpp and header files, etc.
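>>
>> Spelled out as plain Rake file tasks, that wiring might look like this
>> (a sketch; the compiler commands are simplified):
>>
>>   file 'A.o' => ['A.cpp', 'A.h'] do |t|
>>     sh "g++ -c A.cpp -o #{t.name}"
>>   end
>>
>>   file 'B.o' => ['B.cpp', 'B.h'] do |t|
>>     sh "g++ -c B.cpp -o #{t.name}"
>>   end
>>
>>   # both libraries depend on the same object files
>>   file 'libsomething.so' => ['A.o', 'B.o'] do |t|
>>     sh "g++ -shared A.o B.o -o #{t.name}"
>>   end
>>
>>   file 'libsomething.a' => ['A.o', 'B.o'] do |t|
>>     sh "ar rcs #{t.name} A.o B.o"
>>   end
>>
>>   task 'compile:cpp' => ['libsomething.so', 'libsomething.a']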
>>
>> So now you have one forest of dependencies in the project, all of
>> which are executed as necessary by the project's compile task.  And
>> one forest of projects, all of which are also executed as necessary by
>> the project's compile task.
>>
>> Your compiler object now has three uses:
>> a) It makes sure all these tasks exist and get invoked.  There's no
>> need for it to run a single compiler instance on all the files.  We do
>> that for Javac because it's Javac, but the compile method can do
>> whatever it deems necessary.
>> b) You get an easy way to control compiler options across all of
>> these, and inherit them from parent projects.  So you could, say, pick
>> the target architecture in the top-level project and have all the
>> compilers inherit it.
>> c) Your compiler can run all these tasks in parallel.
>>
>> And since libsomething.so is also a task, if you want you can control
>> some of these options directly on that task.
>>
>>
>>>
>>> note that all of this tree needs to rely on the 'resources' task, since
>>> some
>>> headers may be generated. so 'resources' need to run before all the
>>> timestamp checking and compilation is done.
>>>
>>
>> The resources task is specifically for copying files to the target
>> directory that are not handled by the compiler, like images, I18N
>> resources, configuration files, etc.  It's not for generating code
>> used during compilation.
>>
>>
>>>>>
>>>>> of course the factory method can create just one task that does all
>>>>> the rest in its action (compile obj files and link), but i do want to
>>>>> use tasks for the following reasons:
>>>>> 1. it makes the logic more like make, which will assist acceptance
>>>>> 2. it can use mechanisms in unix compilers to help make. specifically,
>>>>> most (if not all) unix compilers have an option to spit out
>>>>> dependencies of the source files on headers.
>>>>> 3. it reuses timestamp checking code in rake (and, if ever
>>>>> implemented, rake's checksum based recompilation)
>>>>> 4. if rake implements a job execution engine (like -j in make), then
>>>>> structuring compilation by tasks will allow it to parallelize the
>>>>> execution.
>>>>>
>>>>> but, i think the solution is easy: similar to the 'build' "pseudo
>>>>> task", i can create a 'compile:prepare' pseudo task that depends on
>>>>> 'resources' etc. then, the factory method needs only to depend on
>>>>> 'compile:prepare' (the logic is that another extension can then add
>>>>> other things to do before compile without needing to change the
>>>>> compile extensions)
>>>>>
>>>>>
>>>>
>>>> We had compile:prepare in the past, which invoked resources and ...
>>>> well, that's about it.  It turns out that just having compile and
>>>> doing everything else as prerequisites is good enough.
>>>>
>>>>
>>>>
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> package & artifacts
>>>>>>> =========
>>>>>>> overview
>>>>>>> ---------------
>>>>>>> buildr has a cool concept that all dependencies (in 'compile.with')
>>>>>>> are converted to tasks that are then simple rake dependencies.
>>>>>>> However, the conversion is not generic enough. to compile C++ code
>>>>>>> against a dependency one needs 2 paths: a folder containing headers
>>>>>>> and another containing libraries. To put this in a repository, these
>>>>>>> need to be packaged into one file. To use after pulling from the
>>>>>>> repository, one needs to unpack. So a task representing a repository
>>>>>>> artifact is in fact an unzip task, that depends on the 'Artifact'
>>>>>>> task to pull the package from a remote repository.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>> Let's take Java for example, let's say we have a task that depends on
>>>>>> the contents of another WAR.  Specifically the classes (in
>>>>>> WEB-INF/classes) and libraries (WEB-INF/lib).  A generic unzipping
>>>>>> artifact won't help much, you'll get the root path which is useless.
>>>>>> You need the classes path for one, and each file in the lib (pointing
>>>>>> to the directory itself does nothing interesting).  It won't work with
>>>>>> EAR either, when you unzip those, you end up with a WAR which you need
>>>>>> to unzip again.
>>>>>>
>>>>>> But this hypothetical task that uses WAR could be smarter.  It
>>>>>> understands the semantics of the packages it uses, and all these
>>>>>> packages follow a common convention, so it only needs to unpack the
>>>>>> portions of the WAR it cares about, it knows how to construct the
>>>>>> relevant paths, one to class and one to every JAR inside the lib
>>>>>> directory.
>>>>>>
>>>>>> I think the same analogy applies to C packages.  If by convention you
>>>>>> always use include and lib, you can unpack only the portion of the
>>>>>> package you need, find the relevant paths and use them appropriately.
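>>>>>>
>>>>>> For instance, a convention-driven unpack might look like this (a
>>>>>> sketch; the helper name and the include/lib layout are assumptions):
>>>>>>
>>>>>>   # lazily extract only the two directories the compiler cares about
>>>>>>   def unpack_dev_package(zip, dest)
>>>>>>     file dest => zip do
>>>>>>       mkdir_p dest
>>>>>>       sh "unzip -o -q #{zip} 'include/*' 'lib/*' -d #{dest}"
>>>>>>     end
>>>>>>     { :include => File.join(dest, 'include'),
>>>>>>       :lib     => File.join(dest, 'lib') }
>>>>>>   end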
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>> (note: not sure i'm following you here.)
>>>>>
>>>>>
>>>>
>>>> Artifacts by themselves are a generic mechanism for getting packages
>>>> into the local repository.  Their only responsibility is the artifact
>>>> and its metadata, so a task representing a repository artifact would
>>>> only know how to download it.
>>>>
>>>> You can have a separate task that knows how to extract an artifact
>>>> task and use it instead, that way you get the unpacking you need, but
>>>> not all downloaded artifacts have to be unpacked.
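>>>>
>>>> Concretely, something along these lines (a sketch; the artifact spec is
>>>> made up, and String#ext comes from Rake):
>>>>
>>>>   zip = artifact('org.example:mylib:zip:1.0')  # download task
>>>>   dir = zip.to_s.ext('')                       # unpack next to the zip
>>>>   file dir => zip do |t|
>>>>     sh "unzip -o -q #{zip} -d #{t.name}"
>>>>   end
>>>>
>>>> Only tasks that depend on the extracted directory trigger the unpack;
>>>> artifacts nobody extracts are merely downloaded.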
>>>>
>>>>
>>>
>>> yes, this is what i'm currently doing, as i explained below.
>>>
>>> but what i want is for me to be able to do that by integrating with the
>>> existing 'artifacts' task. right now it will only return Artifact
>>> objects.
>>> I'd like to have a more elegant solution than just to run over them and
>>> create my own objects, which i think will be more tricky with transitive
>>> dependencies (where transitivity may come from my artifacts, e.g. the
>>> project's artifacts)
>>>
>>>>
>>>>
>>>>>
>>>>> my current implementation creates classes that have methods to retrieve
>>>>> the
>>>>> include paths, the library paths and the library names. I don't use the
>>>>> task
>>>>> name, since it is useless (as you mentioned). so I have an
>>>>> ExtractedRepoArtifact FileTask class that implements these methods by
>>>>> relying on the structure of the package ('include' and 'lib'
>>>>> directories),
>>>>> it depends on the Artifact class and its action is to extract the
>>>>> artifact.
>>>>>
>>>>> When given a project dependency, i return the build task which
>>>>> implements
>>>>> the artifact methods mentioned above by returning the
>>>>> [:source,:main,:include] and [:target, Platform.id, :lib] paths. It
>>>>> also
>>>>> allows the user to add include paths (e.g., for generated files) which
>>>>> are
>>>>> then both used for compilation and returned by the artifact methods.
>>>>>
>>>>>
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> furthermore, when building against another project, there is no need
>>>>>>> to pack and unpack in the repository. one can simply use the
>>>>>>> artifacts produced in the 'build' phase of the other project.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>> Yes.  Right now it points to the package, which gets invoked and so
>>>>>> packs everything, whether you need the packing or not.  You don't,
>>>>>> however, have to unpack it; if you know the packaging type you can be
>>>>>> smarter and go directly to the source.
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>> but i don't want to pack if there's no use for it. speed is critical
>>>>> in this project: since there's no eclipse to constantly compile code
>>>>> for you, developers need to run the build after each change. having it
>>>>> pack unnecessarily wastes time.
>>>>>
>>>>>
>>>>
>>>> One step at a time.  I would worry if we can't do that at all, but if
>>>> it's just optimization, we can get to the more problematic issues
>>>> first.
>>>>
>>>>
>>>>
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> finally, in C++ in many cases you rely on a system library.
>>>>>>>
>>>>>>> in all cases the resulting dependency is two-fold: on include dir
>>>>>>> paths and on library paths. note that these do not necessarily
>>>>>>> reside under a shared folder. for example, a dependency on another
>>>>>>> project may depend on two include folders: one just a folder in the
>>>>>>> source tree, the other of generated files in the target directory.
>>>>>>>
>>>>>>> suggestion
>>>>>>> -------------------
>>>>>>> While Buildr.artifacts is only a utility method, so one can easily
>>>>>>> write his own implementation and use that, I think it will be nice
>>>>>>> to be able to get some reuse.
>>>>>>>
>>>>>>> * when given a project, use it as is (not 'spec.packages'), or allow
>>>>>>> it to return its artifacts ('spec.artifacts').
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>> Yes.  Except we're missing that whole dependency layer (that's
>>>>>> something 1.4 will add).  Ideally the project would have dependency
>>>>>> lists it can populate (at least compile and runtime), and other
>>>>>> projects can get these dependency lists and pick what they want.  So
>>>>>> the compile dependency list would be the place to put headers and
>>>>>> libraries, without having to package them.  We don't have that right
>>>>>> now.
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>> this is the purpose of the 'spec.artifacts' suggestion (that is, an
>>>>> 'artifacts' method in Project). maybe we need to classify them
>>>>> similarly to my suggestion for 'compile', so the Buildr.artifacts
>>>>> method receives a 'classifier' argument, whose value can be, for
>>>>> example, 'java', and calls 'spec.artifacts(classifier)'. are we on the
>>>>> same page here?
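>>>>>
>>>>> to make the suggestion concrete, something like (a purely hypothetical
>>>>> sketch, not existing buildr API):
>>>>>
>>>>>   class Buildr::Project
>>>>>     def artifacts(classifier = nil)
>>>>>       # per-classifier lists registered by language extensions,
>>>>>       # falling back to today's packages list
>>>>>       (@artifact_lists ||= {})[classifier] || packages
>>>>>     end
>>>>>   end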
>>>>>
>>>>>
>>>>
>>>> I'm looking at each of your use cases and trying to identify in my mind:
>>>> a)  What you can do right now to make it happen.
>>>> b)  What we should accommodate for if we added another feature.
>>>> c)  What new feature we would need for this.
>>>>
>>>> I'm starting with a) because you can get it working right now; it may
>>>> not be elegant and not work as fast, but we can get that out of the
>>>> way so we can focus on doing the rest.  There are some things we're
>>>> planning on changing anyway, so I'm also trying to see if future
>>>> changes would address the elegant/fast use cases.  I can tell you what
>>>> I have in mind, but there's no code yet to make it happen.  And then
>>>> identify anything not addressed by current plans and decide how to
>>>> support that directly.
>>>>
>>>>
>>>
>>> i got it working now. but i'm doing several code paths in parallel. i
>>> have a 'make' method instead of 'compile'. the reasons are both that i
>>> need to create several tasks, not a 'compiler' object (and i want to
>>> create them before rake's execution starts), and that i need to create
>>> different implementations per platform.
>>>
>>>>
>>>> Right now, project.packages is good enough for what you need.  It's an
>>>> array of tasks, you can throw any task you want in there and the
>>>> dependent project would pick up on it.  You don't have to throw ZIP
>>>> files in there, you can add a header file or a directory of header
>>>> files, or a task that knows it's a directory of header files.
>>>>
>>>> It's inelegant because project.packages is intended to be the list of
>>>> things that get installed and released, so it's an "off the label" use
>>>> for that part of the API.  But it will work, and if you just add
>>>> things to the end of project.packages, they won't get installed or
>>>> released.  So project.packages is the same as project.artifacts, just
>>>> with a different name.
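>>>>
>>>> For instance (a sketch of that off-label use; the paths are made up):
>>>>
>>>>   define 'mylib' do
>>>>     # expose headers and the built library to dependent projects,
>>>>     # without installing or releasing them
>>>>     packages << file(_('src/main/include'))
>>>>     packages << file(_('target/libsomething.so'))
>>>>   end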
>>>>
>>>>
>>>
>>> or i can implement my own 'artifacts' method, which is what i did because
>>> i
>>> need different artifact objects than what Buildr.artifacts returns.
>>>
>>>>
>>>> Separately, we need (and are planning and working on) smarter
>>>> dependency management, which you can populate and anything referencing
>>>> the project can access.  It won't be called artifacts but dependencies,
>>>> it will do a lot more, and it will be more elegant and documented for
>>>> specific use cases like this.
>>>>
>>>>
>>>>
>>>>
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> * if a symbol, recursively call on the spec from the namespace
>>>>>>> * if a struct, recursively call
>>>>>>> * otherwise, classify the artifact and call a factory method to
>>>>>>> create it. classification can be by packaging (e.g. jar). but
>>>>>>> actually, i don't have a very good idea here. note that for c++,
>>>>>>> there needs to be a way of defining an artifact to look in the
>>>>>>> system for include files and libraries (maybe something like
>>>>>>> 'openssl:system'? - version and group ids are meaningless).
>>>>>>> * the factory method can create different artifacts. for c++ there
>>>>>>> would be RepositoryArtifact (downloads and unpacks), ProjectArtifact
>>>>>>> (short circuit to the project's target and source directories) and
>>>>>>> SystemArtifact.
>>>>>>>
>>>>>>> I think that the use of artifact namespaces can help here, as it
>>>>>>> allows creating a more verbose syntax for declaring artifacts, while
>>>>>>> still allowing the user to create shorter names for them. (as an
>>>>>>> example, in C++ it will allow me to add to the artifact the list of
>>>>>>> flags to use when compiling/linking with it, assuming they're not
>>>>>>> inherent to the artifact, e.g. turn debug on). The factory method
>>>>>>> receives the artifact definition (which can actually be defined by
>>>>>>> each plugin) and decides what to do with it.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>> 1.4 will have a better dependency mechanism, and one thing I looked
>>>>>> at is associating meta-data with each dependency.  So perhaps that
>>>>>> would address things like compiling/linking flags.
>>>>>>
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> ordering
>>>>>>> =========
>>>>>>> overview
>>>>>>> -------------------
>>>>>>> to support jni, one needs to first compile java classes, then run
>>>>>>> javah to generate headers and then compile c code that implements
>>>>>>> these headers. so the javah task should be able to specify that it
>>>>>>> depends on the java compile task. this can't be by depending on all
>>>>>>> compile tasks of course, or on 'build'.
>>>>>>>
>>>>>>>
>>>>>>
>>>>>> Alternatively:
>>>>>>
>>>>>> compile do |task|
>>>>>>  javah task.target
>>>>>> end
>>>>>>
>>>>>> This will run javah each time the compiler runs.
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>> but running each time is what i want to avoid. not only do i want to
>>>>> avoid
>>>>> the invocation of 'javah', but when invoked it will change the
>>>>> timestamp
>>>>> of
>>>>> the generated headers and so many source files will get recompiled.
>>>>>
>>>>>
>>>>
>>>> Rake separates invocation from execution.  Invoking a task tells it to
>>>> invoke its prerequisites, then use those to decide if it needs
>>>> executing, and if so execute.  Whether you put javah at the end of
>>>> compile or as a prerequisite to build, it will get invoked, and it
>>>> should be smart enough to decide whether there's any work to be done.
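>>>>
>>>> The distinction in miniature (file and class names are made up):
>>>>
>>>>   file 'jni/Foo.h' => 'target/classes/Foo.class' do
>>>>     sh 'javah -d jni -classpath target/classes Foo'
>>>>   end
>>>>
>>>>   task 'build' => 'jni/Foo.h'   # invoking build always invokes the
>>>>                                 # file task, but its action executes
>>>>                                 # only when Foo.class is newer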
>>>>
>>>>
>>>
>>> i think i'm missing something here. in the code snippet above, didn't you
>>> add an action to 'compile' and in that action call the javah command? to
>>> me
>>> it looks like at the end of compile javah is run.
>>>
>>>>
>>>> But there is a significant difference between the two.  If you add it
>>>> to compile, it gets invoked during compilation -- and compilation
>>>> implies there's a change to the source code which might lead to a
>>>> change in the header files -- and that happens as often as is
>>>> necessary.  If you put it as a prerequisite to build, it only happens
>>>> when the build task runs.  If you run a task which doesn't run the
>>>> build task, you may end up testing the wrong header files.
>>>>
>>>>
>>>
>>> there should be a rule to the effect of:
>>> file jni_headers_dir => [classes] do |task|
>>>   javah classes  # with whatever flags to put the generated headers
>>>                  # in jni_headers_dir
>>>   touch jni_headers_dir
>>> end
>>>
>>> so if the classes are newer than the directory (and only then) javah
>>> runs. if i run it every time, it will generate headers, changing the
>>> timestamp, which will cause all dependent cpp files to recompile, which
>>> will take a lot of time.
>>>
>>
>> Again, if you do:
>>
>> compile do
>>  file(jni_headers_dir).invoke
>> end
>>
>> It gives you the same effect, except it happens earlier in the process
>> (e.g. before test, not just before build).  You invoke the task, the
>> task looks at the prerequisites, decides if anything needs to be done,
>> and executes only when necessary.
>>
>> Assaf
>>
>>
>>>>
>>>>
>>>>>
>>>>> note that compiling a C/C++ source file is a much slower process than
>>>>> compiling java.
>>>>>
>>>>>
>>>>>>>
>>>>>>> suggestion
>>>>>>> -------------------
>>>>>>> when creating a compile task (whose name can be, as in the case of
>>>>>>> c++, the resulting library name - to allow for dependency checking),
>>>>>>> also create a "for ordering only" task with a symbolic name (e.g.,
>>>>>>> 'java:compile') which depends on the actual task. then other tasks
>>>>>>> can depend on that task.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>> And yes, you'll still need that if you want to run the C compiler
>>>>>> after the Java compiler, so I think the right thing to do would be to
>>>>>> have separate compile tasks.
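>>>>>>
>>>>>> For example (a sketch; the task and file names are illustrative):
>>>>>>
>>>>>>   # symbolic, ordering-only tasks wrapping the real tasks
>>>>>>   task 'java:compile' => 'target/classes'    # the javac output task
>>>>>>   task 'cpp:compile'  => 'libsomething.so'   # the real C++ tasks
>>>>>>   task 'cpp:compile'  => 'java:compile'      # order C++ after Java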
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> I hope all this makes sense, and I'm looking forward to comments. I
>>>>>>> intend to share the code once I'm finished.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>> Unfortunately, the last time I wrote C code was over ten years ago,
>>>>>> so my rustiness is showing.  I'm sure I missed some points because of
>>>>>> that.
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>> I hope I cleared things up. I think it is worth investing in C/C++ as
>>>>> it is a space where there are still no solutions (that i know of) that
>>>>> handle module dependencies.
>>>>>
>>>>>
>>>>
>>>> Definitely.
>>>>
>>>>
>>>>
>>>>>
>>>>> To make sure it is clear, I'm not asking the buildr team to implement
>>>>> C/C++ building. I intend to do that, and have already made a demo of
>>>>> it working, but I do want to ask for the infrastructure in buildr to
>>>>> make it easier, since currently it looks like a "stepson".
>>>>>
>>>>>
>>>>
>>>> In addition, two things we should look at.
>>>>
>>>> First, find out a good intersection between C/C++ and other languages.
>>>> There may be some changes that are only necessary for C/C++, but
>>>> hopefully most of these can be shared across languages; that way we
>>>> get better features all around.
>>>>
>>>> Second, make sure we exhausted all our options before making a change.
>>>> If there's another way of doing something, even a stop-gap measure
>>>> while we cook up a better feature all around, then we have fewer
>>>> changes to worry about.
>>>>
>>>> It's an exercise we did before with Groovy and Scala (earlier versions
>>>> were married to Java) and it worked out pretty well.  We started by
>>>> not making any changes in Buildr to accommodate it, instead using a
>>>> separate task specifically for compiling Scala code that relied on
>>>> some hacks and inelegant code to actually work.  Then took the time to
>>>> build multi-lingual support out of that.
>>>>
>>>>
>>>
>>> i'm already past that. i have ~20 modules compiling, with transitive
>>> dependencies on other modules and on third party modules.
>>>
>>> so i'm now at a stage where i want better integration with buildr.
>>>
>>>>
>>>> Assaf
>>>>
>>>>
>>>>
>>>>>
>>>>> Ittay
>>>>>
>>>>>
>>>>>>
>>>>>> Assaf
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> Thank you,
>>>>>>> Ittay
>>>>>>>
>>>>>>>
>>>>>>> Notes:
>>>>>>> [1] I don't consider linking a library as packaging. First, the obj
>>>>>>> files are not used by themselves as in other languages. Second,
>>>>>>> packaging is required to manage dependencies, because in order for
>>>>>>> project P to be built against dependency D, D needs to contain both
>>>>>>> headers and libraries - this is the package.
>>>>>>>
>>>>>>> --
>>>>>>> --
>>>>>>> Ittay Dror <ittay.dror@gmail.com>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>
>>>>> --
>>>>> --
>>>>> Ittay Dror <ittay.dror@gmail.com>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>
>>> --
>>> --
>>> Ittay Dror <ittay.dror@gmail.com>
>>>
>>>
>>>
>
> --
> --
> Ittay Dror <ittay.dror@gmail.com>
>
>
