From: Guy Barrand
Subject: Re: Meeting Wednesday afternoon 13/4 at 14:00
Date: Tue, 19 Apr 2005 14:03:25 +0200
To: Pere Mato Vila, Philippe Charpentier
Cc: lhcb-core-soft (LHCb CORE Software), lhcb-paste@cern.ch, lhcb-reconstruction@cern.ch, lhcb-davinci@cern.ch, lhcb-comp@cern.ch

Hello Pere, Philippe and all.

A long mail to sum up my views on what was presented and discussed at the "LCG-AA-internal-review versus LHCb" special meeting of last Wednesday afternoon. This mail is a little delayed because it took me some time to write things down. I took the liberty of posting to the same mailing lists as the announcement of the meeting, since this "LCG-AA-internal-review versus LHCb" business is not only pure core software but also contains a lot of sociology.

LHCb and LCG/SPI :
----------------

Clearly, the frequency of remote people complaining that they can't install Panoramix locally is starting to rise. To move forward, I therefore propose a "display distribution challenge" to demonstrate that we can do it for a remote platform, covering the three expected common operating systems and desktops (that is, Linuxes, Windows, MacOSX). I propose the display, not because I consider it the most important application for physics, but because technically it is the de facto biggest application, in the sense that it embeds everything (it may also embed the simulation), and also because it has to deal with the local graphics and desktop resources of the machines; a constraint that the batch applications don't have.

The challenge would consist, with a minimum number of clicks (or commands), in enabling a newcomer or an F77-oriented senior physicist (use cases exist) to:
 - download
 - (build)
 - configure
 - run with a default configuration
 - display some pieces of detector
 - open a file with the mouse
 - survive a couple of "next event"s

If we can do that, we will have solved a lot of problems. And this is no longer a dreamy wish. I have requests now.
It is not clear to me whether this is purely in the hands of SPI. The part that decides what goes into an "integration of everything" at a given moment is definitely in the hands of LHCb. But once things are tagged in the repositories and the list of what has to go into the integration is done, somewhere it seems to me that the "machinery" to:
 - build the binaries for the common platforms
 - put things at disposal
 - and, why not, trigger on the local remote machines the download/configure
should end up being automatic.

LHCb and the reaction : SEAL+ROOT -> ROOT :
-----------------------------------------

With ROOT, it now seems there are hints that we may be getting out of the unacceptable "take it or leave it" situation that predominated under the previously ROOT-biased LCG/AA direction. Fine. Progress. "Take it and... shut up!" would perhaps have been acceptable for a perfect piece of software that induced no extra burden on the backs of people. But with more than 1500 entries reported in ten years in the ROOT bug system, and more than 180 entries still open today, ROOT, as it is today, is far from being perfect software. And the people who have jumped into the internals know that. This "take it or leave it" situation was particularly upsetting knowing that it came from people housed at CERN; the place which, for me, is here to federate the engineering efforts and to help "put things together" to run experiments; therefore a place where one expects a minimum of quality in how things are done. If the mentalities have evolved, it is a huge progress. I really wish that this touchy point, which led to nasty and painful sociology, is behind us. But I am obviously highly skeptical and eagerly await the results in the code.

1) dictionary :
-------------

This is probably the best short-term "testing ground" to feel whether something really happened with the ROOT team. The dictionary/introspection problem is now a clearly well defined problem.
For people who did not follow the thread, it deserves an explanation. C++ clearly lacks a strong introspection/dictionary system that would permit generic operations like declaring classes to an IO system or to an interpreter. These two use cases are particular cases of the more general problem of being able to produce automatically, for some user class, some adapter code toward a facility. An automatic machinery is definitely needed when the "user class" is in fact the full set of classes of the event model of an experiment, and when the facility is, for example, a storage system for which one really does not want to write the adapter code by hand!

In ROOT, this introspection/dictionary is buried in CINT, which had been thought of first of all as an interpreter. It is clear (and I have said so for a long time) that we must grant the ROOT team that they "saw" the potential role of the introspection/dictionary system, especially to deal at once with the problem of IO in a way transparent to users. Fine, huge conceptual progress. But it now appears that this technique can also help in dealing with other "facilities", in particular with interpreters like Python. Clearly the ROOT team had been a little too dictatorial in trying to enforce at once not only their IO, but also their choice of interpreter (and in fact their choices for everything). We definitely know now that in an open world this can't be. People want their freedom, it is their right, and the freedom of some must stop where it encounters the freedom of others (I have not invented this sentence). CINT is an interpreter, but Python is now here, and tomorrow we will probably have something even better. Then it is clear that we need a rethinking of the introspection/dictionary, and in particular to have it out of CINT, in order to have a more open situation.
Fine, then the problematic is clear, and what to do is clear too:
 - Have some standalone package that deals with this particular problem. We have a candidate: Reflex. Fine, progress.
 - Use it to wrap our classes for the ROOT/IO.
 - Then change ROOT in order to be able to use the IO with Reflex, thus without CINT. It probably means a lot of changes in ROOT, due to the fact that this openness had not been thought of from the beginning and that the IO machinery is in quantum entanglement with the rest. (In the process, if someone could get rid of the Draw() method on TClass...)
 - Have CINT changed to use Reflex. Strictly speaking, an interpreter being "on top" of things, this is not a fundamental need; but doing it would permit minimizing the memory usage and improving the performance at run time for people wanting to use this interpreter. (I don't. CINT is anyway too crashy for me.)
 - Use Reflex to wrap for other interpreters, thus for Python for the moment, and why not for Java.

Fine, the work plan is extremely clear. But for the users, all this may look like a little too much plumbing. In order to show that some progress has been made, I then propose another challenge: from a Python script, demonstrate that we can write/read some data of a user class in a ROOT file by using the wanted light dedicated Reflex package, and without using CINT at any moment to do it.

2) Plugin manager :
-----------------

It seems to me that the situation is going to be tougher here. The problem is that there are three pieces of software to deal with: the Gaudi-DLL-loader, the SEAL-Plugin-manager and the ROOT-Plugin-manager. Right now the merging of the Gaudi-DLL-loader and the SEAL-Plugin has not been achieved (which was more or less the plan, if I understood well). And for the moment all these plugin systems are around at the same time in an LHCb application (which is why we can't run on Mac). For LHCb the point is to know what to put in place of the Gaudi-DLL-loader if it has to be replaced.
But in fact, do we want to replace it? If so, then candidate code is around: SEAL-Plugin, ROOT-Plugin.

First there is the sociology. The SEAL-Plugin is clearly "CMS code" (as pointed out by Vincenzo Innocente at the review), and since we have not been able to have two lines of code swapped in it in order to have the port on MacOSX, I quite don't see how we can expect a merge of the SEAL-CMS-Plugin and the ROOT-Plugin. (Being blocked for months on MacOSX for that is a pure scandal. But well, let us proceed.) The advantage of the SEAL-Plugin is that it is already a well defined, quite light, standalone package. Fine. The SEAL-Plugin is used by POOL, which means that "unification" decisions are not of pure LHCb concern. About the ROOT-Plugin, it is once more a question of packaging. Right now this machinery is embedded in the "core", which is some kind of quantum entanglement of a lot of things. Contrary to the dictionary, the situation seems less mature here. Good luck.

3) Math library :
---------------

a) GSL: It seems that people in ROOT and LCG/AA do not want to use the native GSL API. Why? For the pure pleasure of using mathlib::bessel() instead of gsl_bessel()? Pere at some moment mentioned that some ROOT clients can't use GPL code. Interesting. The real true reason is probably here. If so, then the ROOT team wants to drag LCG/AA, and probably the whole of HEP, into a reinvention of the GSL API, and probably a rewriting of this already open source software, for this single reason. Astounding. Clearly some no longer have in mind that our primary goal in writing software is to run HEP experiments. At some moment Pere said that if one had to choose between HEP and non-HEP, the choice would be obvious. Then it is time to choose. If GSL satisfies our needs, it is clear that we have to use it directly. We must not promote the invention of a new API, and must not mandate LCG/AA people to work on that.
The job of making a math library had been done by others during the time that CERN and HEP were unable, in the 80s-90s, to leave FORTRAN and to migrate the CERNLIBs at least to C. Too late to reinvent something from scratch now. The job has been done by others, and probably in a better way. (GSL wrappings exist too for Java and Python; something probably not in the plans of the mathlib:: reinvention project.) Now if ROOT has a problem with some "clients" that can't be GPL, the best for them is to take the GSL API and offer these people a reimplementation of their own under the non-GPL and the non-CERN licence and copyright:

 /* Copyright (C) 1995-2050, Rene Brun and Fons Rademakers.*/
 /* All rights reserved.*/

It is clear that I don't want to waste ten years of my life on that.

b) Basic math functions like sqrt, cos, exp, log10. These come with C/C++. I quite don't see why we should deal with any encapsulation of that; some SEAL::Sqrt, or worse some TMath::Sqrt that today drags in, due to the "ClassDef", the whole of CINT, and that anyway uses behind... the C sqrt! In passing... I quite do not see why one would use some TString while std::string is here. Does "SEAL+ROOT->ROOT" also mean that we are going to pass "const char*" everywhere and then do all string manipulation with strcmp, etc., like in the 80s (for the people who had passed to C at that time)? In some of the revisited ROOT code, are we going to continue to have that?

c) CLHEP/LorentzVector: then, about maths, remain the HEP-specific things. If not in GSL (really?), this could be our two cents of contribution to it. Did we try (hard) to do that? Does LCG/AA have plans to try that? If anyway some math remains in our hands, where to put it? It is true that CLHEP does not have such good responsiveness. I have asked for the DLL for Windows for a long time without any response so far, and I have ended up doing my own CMT building for CLHEP.
But before reinventing a library and a new namespace and changing all our code, it could be good to try to shake the CLHEP people a little and to revitalize this library by injecting LCG/AA manpower. Often a library "falls asleep" simply because the people behind it feel a little alone... But then, if nothing happens, or if we fall on people who don't want to move, it is clear that we shall have to put the needed code in some place of our own.

POOL :
----

Again and again: POOL over ROOT, as ROOT is now, is wrong. POOL should be over clean standalone IO and introspection packages. Clearly, what is going to be done or not done around introspection/ROOT-Core/CINT is going to be crucial. If well done, and with a standalone repackaging of the ROOT core to isolate the IO machinery, we may pass from something unacceptable to something perhaps bearable. And if so, then POOL over these new layers will start to make sense from the engineering point of view. It will be a progress. Anyway, if I were the LCG/AA or POOL team, I would have a look at something called "HDF5". The rest of the world does not seem to have waited for us...

GUI :
---

The position is now clear: Qt. With the restriction that Bill Gates and Steve Jobs are not on Qt and will not be on Qt for a long time. But this is a fight which is not ours and, short of a pure miracle, it will not be settled during the LHC era. (Perhaps with a new pope...) The current task with Qt is to be able to deal with the TQtCanvas, and thus to embed ROOT plotting in a non-ROOT Qt-based GUI, in particular with things spawned from Python in a thread; today this crashes for me on my Mac. I must point out that I did the same job with HippoDraw in a couple of hours at the last visit of Paul Kunz to CERN. It worked basically at the first shot. Why is it always on the same code that we end up wasting our time?
Graphics :
--------

I stick to the choice of Inventor/Coin3D, but also to the idea of doing all the graphics (2D/3D) with one library. On that point I think the ROOT team is also convinced. Fine, progress. Clearly the ROOT team develops its own graphics library, which is something I gave up when Open Inventor was put open source in 2000 by Silicon Graphics, and when the Coin3D library came along, which was far more complete than my own SoFree package. (Systems in Motion also delivers the viewers for the various GUIs around.) It is totally clear that at some point these two libraries (Coin3D and TReInventor) will coexist. The requirement I put on the ROOT team is simple, and it is the same as ever: first, the packaging. I want something that presents itself like Inventor. That is to say, an upper layer over OpenGL whose viewers can be integrated in a GUI that I master, and a library which is not entangled with unrelated things (for example mathlib::bessel()). Moreover, it is clear that I want to find at least what is in Inventor today. See the Open Inventor books for that. Anything else would be a quantum jump into the past.

Right now in TReInventor the situation is not so. There is still the 2D/3D distinction, with different sets of classes over two different rendering layers. All this is now untraceable; a pain. For example, today we can't have the 2D (thus the plotting) over OpenGL. It seems to be in the plan to have at least that. Fine. It will be a progress. Then perhaps, ten years from now, CERN will finally have reinvented Inventor and will also be able to plot with a common graphics layer; something that I have been able to do for more than five years with my HEPVis/SoPlotter Inventor nodekit.

I must point out that at the LCG/AA review, only one man had been invited to present his views on graphics for the four experiments. And in particular he came with pictures not done with the tool and library promoted in the experiments.
How to avoid crying scandal? And I have not been the only one to jump to the ceiling. Some at LAL, working on ATLAS, are still up there. (In fact at LAL, we eagerly await antigravitation; it would help in working with CERN on software.)

AIDA :
----

Then another touchy business.

1) Why interfaces ?
-----------------

For three main reasons.

The first one is technical. A pure abstract interface (and not some TVirtualSomething attached through some "ClassDef" to 200 klines of code of some knotty Core, or to a full interpreter) permits a nice decoupling between libraries and domains. A canonical example is between a histogram library and a plotter library. If the histogram implementation uses a pure abstract interface, then the plotter needs to see only the histogram pure abstract interface. There is no need to link the plotter library against the histogram library. Nice. When you have seen that, the vision you have of how to organize a big software changes. (It is the same as having "seen" the point of OO versus FORTRAN or FLUKA datacard-oriented programming.)

The second reason is sociological. A pure abstract interface is the minimum piece of code that can be shared between two developers who disagree on everything but want anyway to be interoperable at run time.

The third reason is that it permits extracting and discussing a user API without dragging in considerations about the implementations. This definitely simplifies the users/developers relationship.

2) In LHCb :
----------

Why is LHCb different from the other three experiments? Because the Gaudi framework heavily uses interfaces (which are in fact no longer pure abstract) and this is GOOD. Gaudi is a manager of services that export interfaces. In particular this permits changing the implementation of a service, if needed, without having to smash the user code. This has already permitted passing from HBOOK storage to ROOT storage without impact on user code.
Then it is totally clear that LHCb must continue to use interfaces; continue to "extract" them and continue to improve the existing ones. And clearly people do not have to look too much at the other experiments, because they did not see the point. CMS is template-oriented (and thus spends hours compiling). ATLAS; well, ATLAS is ATLAS. ALICE and the ROOT team did not see the point, because you can't see the point when you have in your head that nothing valuable exists elsewhere and that you want (anyway) to rewrite everything.

3) AIDA :
-------

People are perhaps not aware that the AIDA::IHistogram which is now spread around the world (yes, yes) was born around LHCb/Gaudi. This concept of interfaces had also been seen by other people who grasped the idea and tried to push it for the specific purpose of generic analysis tools. And it appears to be a powerful integration vector, especially in this area that needs to interoperate with a lot of domains (multiple storage, GUI, graphics, multiple scripting, etc, etc...) and has to deal also with a lot of languages: C++, Java, Python, C# (I am pretty sure that one day or another we are going to see someone coming with a C# implementation of the AIDA interfaces). At CERN, some had seen the point about interfaces and generic analysis tools, and for the moment they have been indirectly stopped by people who still don't see, or don't want to see, the point. Hence the death of Anaphe and now the probable death of LCG/PI. But, good luck, the idea escaped CERN; things are in motion in the rest of the world, and I don't think it can be stopped now. For example, the idea has been adopted by a Japanese developer who is doing a pure Python implementation of the AIDA interfaces (see paida on SourceForge). What is sure is that, for LHCb, leaving the IHistogram to use directly everywhere the poor ROOT/TH (see below) would be a huge quantum jump into the past.
Moreover it would slam the door on all that has been done in Java at SLAC, and on what I have done around Inventor plotting at LAL.

4) LHCb and PI :
--------------

In my view, LCG/PI made three mistakes.

One was building over ROOT and the THs:
 - Too easy for the ROOT team or pro-ROOT people to come and say "why should I use a layer over the TH?"
 - Beyond this sociological problem, it was wrong anyway, since we know now that the TH is not complete versus the AIDA::IHistogram. The ROOT/TH lacks the handling of the number of entries per bin, which is useful in its own right but also permits handling 2D slicing and projection properly.
 - It is 20% slower than anything done over the std::vector, because the std::vector access is faster than the TArrayD. I can demonstrate that at any moment with my HCL package.
 - The inheritance tree of the TH is wrong (for a 1D object one has forever dummy Y and Z axes that are forever streamed to file!).
 - And, as usual, too much untraceable back "gIntrication" to everything.

Another mistake of LCG/PI was to introduce a "user semantic layer" (the pi_aida classes) to hide the AIDA factories in some upper classes. This complicated things in people's heads; people were unable to make the difference between pi_aida, the AIDA interfaces and the AIDA implementations (the "AIDA is not Anaphe and Anaphe is not AIDA" syndrome). Moreover, the pi_aida classes were too tied to a specific implementation. It did not work with the LAL or SLAC code; other developers did not follow. Then it induced a break in the consistency of the AIDA offer. A "user semantic layer" is perhaps relevant, but it has to be well done and well thought out. In particular, as it is a new API, it must come in consistency with the other implementations, so that the AIDA team can present this new user API in a consistent way.

One last mistake was that people at CERN ignored what had been done at LAL and SLAC.
Trying to redo everything locally, with a take-it-or-leave-it-minded ROOT project around, and without seeking, at the level of the code, strong collaborators outside, was (is!) a dead end.

And an extra technical criticism directly related to LHCb: the PI implementation is based on the SEAL-Plugin, which has not been merged with the Gaudi-DLL-loader. Hence extra complications at the level of Gaudi. This was perhaps OK for CMS, but not for us.

5) LHCb and AIDA :
----------------

The slide that Pere showed about AIDA in LHCb is probably the best attitude for the moment. Stick to the interfaces, because it is in fact the Gaudi spirit! Stay for the moment with the two implementations that work. It is clear that at this point I would suggest having a look at what could be done with the material of LAL and SLAC, and obviously with the LAL one. It is already around in Panoramix, it is in C++, it is faster on numerous points than ROOT, and it attacks the plotting with the same graphics library as Panoramix. For a more direct usage of the histogramming of another AIDA implementation, it is clear that we shall have to solve the problem of object management. The histograms must be manageable in the Gaudi transient store, but must also be manageable by the AIDA implementation. Things must be arranged so that a histogram is storable through the Gaudi/POOL channel, but also through the AIDA::ITree channels of the AIDA implementation. This is a technical riddle that we would have to solve, but I think it is worth trying. It will open doors.

6) LHCb and AIDA and ROOT and LCG/AA :
------------------------------------

About ROOT, the blocking point with interfaces is clearly first of all conceptual, since the ROOT team did not yet see the point.
Like Geant4 (which also did not see the point), they have a lot of virtual classes (TVirtualSomething in ROOT and GVSomething in Geant4), but they did not yet see that they would gain a lot in clarity and quality the day they make the quantum jump of passing to some ROOT::ISomething and Geant::ISomething.

Ah, another point. For ROOT and Geant4 it would be good to pass, one day or another, to namespaces in the code, but also for the libraries. Geant4 has libG4Something, and ROOT should do the same: libROOTSomething. One day or another we are going to have nasty clashes with some library of some OS with naming like: libCore, libNew, libTree, libMatrix, libHtml, libGeom. If LCG/AA can do something... (It is incredible that we have to ask for this kind of trivial thing.) In fact, strictly speaking, the ROOT core should be 100% pure abstract interfaces, with mainly one concrete entry point to dynamically load plugins. But this is probably 50 years too early to ask for... Right now, it is clear that the problem of introspection is also in the way; a TVirtualSomething has for the moment a "ClassDef" that ties it to CINT. But perhaps with the new architecture based on Reflex we may have a way out.

Plotting :
--------

Plotting is a special, crucial application of any graphics library because, being generic, it can be treated outside of any experiment context. Thus plotting can in principle be delegated to LCG/AA. Right now, what is in ROOT is a mere C++ rewriting of the HPLOT code over the reinvented ROOT graphics layers. Other people around also worked on the problem of plotting by using existing graphics libraries (LAL with the SoPlotter Inventor nodekit, SLAC with the java2d library, and HippoDraw with a direct non-3D Qt drawing layer). Clearly the overall interactive environment must be thought out to be open to various plotters. I hope to achieve this goal in Panoramix. Clearly, the usage of abstract interfaces is going to be important here.
It has already been demonstrated that the OpenScientist/Lab AIDA implementation can plot other AIDA histogram implementations. LCG/PI had been able to use the OpenScientist/Lab AIDA plotter. ROOT, as it is now, can't do this kind of thing. It can plot only its own T-business.

A word about online :
-------------------

Then PVSS is going to be on Qt. Fine. This will open doors. But I am not sure that opening a door on a TQtCanvas plotter done on some TReInventor library is the best choice. Speaking of opening doors, there is something that we may be able to do. In NA48 we had put a 3D display (done with the same software as the offline display) at the entrance of the control room. It was nice. The software regularly picked some events from the flow and showed them. This kind of display has the best effect with an asymmetric detector and, good luck, it appears that LHCb is asymmetric. Moreover, with the large flat screens that we have now, it would be really nice to have that; probably totally useless for controlling anything, but nice.

A word about simulation :
-----------------------

I did not express any opinion on that at the meeting, but I have moods here; at least about FLUKA. I quite don't see why CERN signs anything with people who don't want, in 2005, to put their code open source. The technical sociology and steering of Geant4 are perhaps a pain, but at least one has a reasonable API to work with and can jump into the code if needed. To explain the angriness of this paragraph: right now, with a physicist at LAL, we are breaking our teeth setting up a FLUKA program by assembling a set of strange F77-like "datacards"; we have the impression of being back in 1970. It would be interesting to histogram (with an AIDA::IHistogram) the age of the people developing that. Sometimes I really, really don't understand.

Conclusions :
-----------

Having Pere as the new head of the executive of LCG/AA is definitely a great hope for LHCb (and for me).
But my first impression anyway is that his first decision has been to "let the wolves in". But well, perhaps things have really changed in the heads of some. Let us see. What is sure is that time is really running now. We shall not have so many occasions to introduce major changes in the basement and to break some APIs if needed.

LHCb is small compared to the others. In some sense it is a chance; it helps in taking decisions. But it has the obvious drawback of the manpower. Yet the strategy to compensate is luminous. We have the huge chance to be after the open source revolution of the mid 1990s. Then we can compensate by picking the good open source software around that we need (a good open source software is often one which was born open source). Obviously this is for the basement plumbing. It is clear that the "on top of Gaudi" part, that is to say the event and detector models and all that gravitates around them (reconstruction, simulation, physics analysis algorithms, etc.), is in the pure hands of LHCb physicists. This code is not going to be found in open source; it is not going to be provided by LCG/AA, and it can't be provided by pure software engineers who are not paid to have a HEP detector in their head.

This is not the place to have some internal review of these parts (and it is not up to me to do that), but I would take the occasion to point out that a good guideline here would be to avoid unnecessary complexity; in the long run, complex code always ends up being in the way, especially when written by people who "pass along", which is a common case in HEP. (I call that the "bright summer student software syndrome".) It is not obvious to qualify "complexity". A good criterion is to take the eye of a newcomer. If someone, normally bright, coming for example for a thesis, has to spend months understanding the entanglement of the whole software, or an algorithm, or the way to describe a piece of detector, before being at work, then something is wrong somewhere.
Software must help in doing physics; it must not be in the way. This being said, the overall strategy seems clear to me: do not waste our time reinventing what already exists for the plumbing. Concentrate on the essential, the software for physics, and take the dedicated rest from elsewhere. The requirements on LCG/AA are then clear. Have them help in finding good open source software. Have them help in putting the infrastructure in place to make this software available for us. Have them help in following up this software. Have them help in offering the infrastructure to do the integration "of everything" and for doing its deployment. And, at last, if really needed, have them help in developing the lacking pieces, in a good collaborative spirit with the home institute software engineers involved in the program.

With my best regards.

For this particular occasion, I am going to change a little my usual "eXtreme debugger" signature and sign with:

Guy Barrand
 - Software research engineer at CNRS/IN2P3/LAL.
 - Executive responsible of the Panoramix application.
 - Someone who had not been invited as a referee of the LCG/AA internal review.
 - Writer, among other things, of the packages that should have been done at CERN long ago: the histogram HCL package and the IO Rio package.
 - Responsible of the OpenScientist integration, which is somewhere what LCG/AA should be, but with the full federating power and infrastructure that an engineer, based in a home institute, expects from CERN.
 - Debugger at LAL, for more than 20 years now, of CERN software. And be sure that it is not a joke. I discovered at the last CHEP that I was not alone in having this strong feeling. We should form some kind of club one day or another. (The "gdb cern" club. Something like that.)
 - Someone who has been able, up to now, not to drown in today's ROOT engineering mangrove.
   Each time I fall on the ROOT logo, I can't refrain from seeing a poor lady trapped to the ground in a nasty way, trying desperately to escape.
 - A software plumber. And if I had to do it again, be sure I would do theoretical physics with a blackboard and a chalk.
 - An active member of the AIDA team. The last hope?
 - A French citizen who will have to vote in May for or against the European constitution. In fact now, I await the output of the beginning-of-May introspection workshop conducted by a lab that has already had some kind of European constitution for a long time. Due to the indirect impact it is going to have on my professional life, it will be a good indicator for me of what to vote... (Strictly speaking, the output of the ROOT-5 workshop in September would have been better for me to take that decision; but my government, not aware of the true issues, did not want to move the date of the referendum for me.)