


Talk:EclipseLink/Development/Incubator/Extensions/DatabasePlatformPromotion

Thank you for putting this together.

While working on the Symfoware platform incubation, I wondered about the following points.

Functionality Check List

About the feature list: how about splitting it into categories, such as features that are mandatory or optional per the JPA spec, and EclipseLink-specific functionality?

As EclipseLink is used in Java EE application servers, I think users might want to know which restrictions are relevant when running Java EE-compliant applications.

About "Default runs of Core SRG and LRG", as SRG is mandatory to run and LRG is not, their test status could be different. So maybe better to put them on separate lines?

BTW, the current JPA test wiki page doesn't explain how to run the SRG test set. The ant file has a 'test-srg' target, but I found no reference to it on the wiki page (EclipseLink/Test/JPA).
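For the record, running the SRG subset via the target mentioned above would presumably just be the following, executed from the directory containing that ant file (the working directory and any required setup properties are assumptions on my part):

    ant test-srg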

Are there any rules about who executes the final test run to confirm that a DatabasePlatform passes the basic test sets, or the LRGs for certification? Is that still the contributor, or someone from the EclipseLink team? I'd suggest the contributors do that and send their results to the EclipseLink team (maybe to be published online on a platform testing page with details like date, version used, JDK, OS, etc.).

Due to limitations on the availability of databases, expertise and QA resources, for the most part the contributor will have to arrange for the execution of the tests. I think that publishing the results of tests at certain intervals is a good idea. Perhaps we can put these results on a wiki page and link to it from the class header. Let's fine-tune what goes on that page as we publish the SymfowarePlatform. - tware

Maintenance

What is the procedure for continued regression testing (RT) and maintenance after a DatabasePlatform has been included in EclipseLink?
I suppose some test sets need to be run at certain stages:

  • RT after changes in core code that could affect platforms.
  • RT before releases.
  • RT when new releases of the DB come out.

Also, will the EclipseLink developers update contributed platform classes if they make changes in core code that affect them, and if so, are the contributors required to review those changes and run the tests again?
Is it possible to provide the EclipseLink team with the database product, so that the tests can be included in regular automated test runs (and contributors can be notified of issues soon after they are introduced)?

The latter would be ideal; however, I fear this will not always be possible. As long as the original contributor is able to actively maintain the platform, things are just fine. Yet once that is no longer the case, it becomes a challenge to maintain the platform, or even to figure out that it is no longer maintained. One might argue that's just the community way, and the first one who wants to use the platform and finds out it's become broken ought to fix it. However, it is not very encouraging for mere consumers to download an artefact and then discover its "archeological" nature.

As mentioned above, I do not think this process can automatically include an option that has the database product provided to the EclipseLink team. There are IP issues with putting databases on the servers at the Eclipse Foundation, and resourcing issues with adding to our QA team's burden. I am interested in suggestions about how best to deal with testing. So far the best I can come up with is to have contributors publish test results, including the versions run against, on a wiki page. The EclipseLink committer team can provide suggestions about when tests should be run, but it will be impossible to enforce those suggestions, given that contributors have different constraints on their time than the committer team. More about the "archeological nature" below in the comments about end of life. - tware

Support

The word "supported" is used in several places. A supported function means to me a function works as expected (to the best knowledge of the contributor). But what could "supported by: FooCompany" mean to people? What is expected from Fujitsu if I include that in my DatabasePlatform?
I can (and am eager to) help anyone trying to run EclipseLink with Symfoware, or anyone trying to extend/improve the platform class or investigate test failures that arise after my successful runs due to introduced changes, etc. But my company won't provide 7 days a week, 24 hour support. I think that the use of that word in the template class comment needs clarification.

End Of Life

There might come a time when you'd want to prune database platforms, either because they are for a DB or DB version that is no longer available, or because there is no longer anyone in the EclipseLink community (including the original contributor) who can maintain them. You might want to include requirements that need to remain satisfied for a contributed platform not to be pruned in a subsequent version.
A contributor also might want a platform class for a particular database version dropped, to reduce RT load and focus on the current releases of the database product.

End of life is a difficult question to address. Backwards compatibility has been a major concern to the EclipseLink team since long before the code was donated to Eclipse. In the past, the way we have dealt with this issue is to prune things very slowly, and instead try to be very diligent about publishing what we have tested with each release. Based on this philosophy, the strategy of maintaining documentation about what versions tests have been run against, and pointing customers at that documentation, works best for us. I am, however, interested to know if there are alternate suggestions that allow us to maintain backward compatibility for a reasonable number of releases. - tware

Comments on DatabasePlatformPromotion

Thank you very much for addressing this topic. Please allow for some thoughts on it.


Basic Test Suites

The proposal suggests some basic test suites that must run without failure on a database platform in order for it to qualify for EclipseLink. However, it also states that a failure may be avoided by assessing the failure, determining that the test cannot be made to pass because of limitations of the platform, and altering the test not to run for that platform.

While it is certainly true that there will always be tests that run on one database platform and fail on another, I sincerely believe that, to keep the platform promotion process from becoming meaningless, we have to define a core set of tests that simply must succeed on all database platforms to be included in EclipseLink. Otherwise, taken to the extreme, I could provide a database platform where all the tests of the basic test suites are documented to fail.

The core set of tests that need to succeed should probably be based on the load and store statements that are created and executed by EclipseLink (in contrast to JPQL queries). For this purpose, a minimal set of EclipseLink features has to be defined, such that one may say EclipseLink runs on a database platform iff EclipseLink, configured to that set of features, runs successfully on that platform. Then, from the load and store statements used by EclipseLink when configured to those features, the set of mandatory core tests may be deduced.
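To make the "load and store statements" concrete, here is a minimal sketch in plain JPA of the baseline operations such a core set would cover; the persistence unit name "test-pu" and the Employee entity are illustrative assumptions, not part of the actual test suites:

    import javax.persistence.*;

    // Baseline load/store sketch: the INSERT generated by persist() and the
    // SELECT generated by find() are the statements every platform must handle.
    @Entity
    class Employee {
        @Id int id;
        String name;
    }

    public class BaselineLoadStore {
        public static void main(String[] args) {
            EntityManagerFactory emf =
                Persistence.createEntityManagerFactory("test-pu");
            EntityManager em = emf.createEntityManager();
            em.getTransaction().begin();
            Employee e = new Employee();
            e.id = 1;
            e.name = "Test";
            em.persist(e);                  // store: EclipseLink issues an INSERT
            em.getTransaction().commit();
            em.clear();
            Employee loaded = em.find(Employee.class, 1); // load: a SELECT
                                                          // (unless served from the cache)
            em.close();
            emf.close();
        }
    }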

  • The problem I see with 'mandatory tests' is that you assume the database supports the way EclipseLink is driving it. The database might support the feature just fine, in a different way, but the contributor may not have the experience or resources to update core EclipseLink code to implement this for the platform, or the EclipseLink developers cannot accept such a major change for just one DB because of the time it would take to review and test that it does not break things for other DBs. If one test in the 'mandatory' list is one of these, should the contribution be refused, even though its users might be fine with this restriction, or can easily work around it in their applications?
To address the above issue is, I believe, the art of defining the minimal set of features properly. Yet, so as not to let this discussion become too abstract: could you perhaps give an example of what you mean by "work around" in this context, so that we can clarify our views based on that example?
  • Okay. With the implementation of the Symfoware platform, I ran into an issue where, for some tests, EclipseLink's generated SQL uses the ANSI INNER JOIN syntax, while Symfoware only supports the 'old' syntax. I was told that changing EclipseLink to generate the 'old' syntax for Symfoware would take an experienced EclipseLink developer a fairly large amount of time to implement correctly (I gave up after half a day). The work-around for users who run into this issue would be to override the query in orm.xml with the native equivalent using the old join syntax, as in the sketch below. If this occurred with a mandatory test, would my contribution have been refused?
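To illustrate the work-around described above (entity, table, and query names are hypothetical): the query whose generated SQL would contain the ANSI INNER JOIN is replaced by a native query using the old-style join in the WHERE clause; the equivalent named-native-query element can be declared in orm.xml so the application code stays unchanged:

    import javax.persistence.*;

    // Hypothetical override: an old-style comma join instead of the ANSI
    // "INNER JOIN" syntax that Symfoware does not accept.
    @Entity
    @NamedNativeQuery(
        name = "Employee.findByDeptName",
        query = "SELECT e.* FROM EMPLOYEE e, DEPARTMENT d "
              + "WHERE e.DEPT_ID = d.ID AND d.NAME = ?1",
        resultClass = Employee.class)
    public class Employee {
        @Id private long id;
        private String name;
    }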

I don't know how to define a proper minimal set of features that would not prevent future contributions. I agree we could have some guideline about what features need to be supported before a contribution can be deemed complete enough to be useful for other users, but I don't think any particular function should be mandatory; each should be negotiable, and the whole set of supported functionality should be the basis for the decision whether to include a platform or not.

I agree that the ideal situation is that there is a set of tests that must run on all databases. In my experience, however, based on the number of databases EclipseLink runs on and the diverse set of limitations among those databases, that test set would likely be quite small. If the minimum tested functionality is too small, you do not get much promise of reliability from just passing the tests. I lean towards controlling this in a community-based way. No one in the community wants a database platform that has not implemented enough to be useful. Since all discussions about these platforms are held in the open (either on the wiki or the mailing lists), community members have the chance to raise objections about functionality that will be excluded. Additionally, in order to appear in the product, at some point a committer must approve the code and check it in. It is a committer's responsibility to ensure that tests in the SRGs that are not explicitly passing have been excluded for a good reason (they can be much less strict with the LRG tests). I would like to think that those two items provide enough checks and balances to ensure that any new platform is both usable and properly documents its limitations. - tware


What is a Database Platform?

As can be seen from the already existing database platforms for EclipseLink, a database platform may not only be defined by the vendor and/or product name of a DBMS, but may also depend on the database software's major release. However, the functionality and behaviour of a database platform with respect to the EclipseLink test suites may also be influenced by minor releases, or even by applied patchsets of the respective database software. Even worse, test results may also depend on the JDBC driver in use, both on its vendor and on its release.

So, if we want to avoid database platform mushrooming, from my point of view not only the minor release of the database software and the JDBC driver used (along with its release) need to be documented with the expected results, but some decisions also have to be made:

Would we want to say that only one database platform may be contributed per major release of a DBMS?

If so, who will feel responsible for watching out for new minor releases or important patchsets, and for observing the test behaviour with them?

If test results change, will the change simply be documented without further reflection, or, when indicated, are bug reports to be filed with the respective database vendor?

  • Or filed with EclipseLink, assuming the vendor is aware of the incompatibility and is telling its users to use a different approach, in which case the platform class will need to be updated.

Is the database platform to be marked as pending for that minor release/patchset in the latter case?

Do we need a vendor contact who, to some extent, feels responsible for that process, or do we trust that it would work out as an unsteered community effort?

  • I think such a requirement might prevent contributions from the community. Where would someone from the community get such a vendor contact? Would the vendor even listen if that community member is not paying for support?
Well, I am aware that establishing such a vendor contact is indeed a difficult challenge. What I actually wanted to draw attention to with my question is basically two things:
1. We have database platforms where the vendor is, to some extent, involved in the EclipseLink community (e.g. Oracle), and others where the vendor is not. This might result in different procedures and different handling of the platform's maintenance.
2. While it takes considerable effort to build and contribute a database platform, the major challenge is to keep up its maintenance. Otherwise an included platform might become useless surprisingly soon. That's why I think rules for platform maintenance ought to be considered carefully.

If we leave the maintenance of a database platform completely uncontrolled, would we then alternatively want to establish a rule that, say, a platform is to be pruned if its contributed documentation falls behind the vendor's most current minor release by more than some agreed amount?

  • I believe Oracle has multiple platforms to support added functionality; the others seem to have only one. I doubt platforms need updates with each minor version, or even with major ones. Should a platform be pruned even though it is likely still to work as-is? I was thinking platforms could be considered for pruning once the tested configuration is on a DB major version so old that there are no users, or that is no longer supported by the vendor. Or, to look at it from the other side: a platform is safe from pruning as long as it is clearly still in use and in working condition (as seen from bug reports and discussions on the mailing lists), with no regular reports from users that it does not work with the latest version of the DB. Then, part of the pruning process would be a (long) stage (of one minor/major version of EclipseLink, or of the actual DB) in which the platform is a 'candidate for pruning' and we ask the community whether someone would certify the platform again on the latest DB release and update the documentation/platform class.
Actually, I should have been more precise with what I was stating above: by "falls behind" I meant that the latest DB version the tests ran on with the expected results is some number of releases behind, and on newer releases at least one test fails that succeeded before. However, it may also be an issue if it is simply unknown what results the tests produce on the current DB version, because no one in the community has checked. I fully agree with putting an outdated (unmaintained) platform on a "red list of endangered species", though, and thus calling on the community to save the listed platform from extinction.

Is a database platform to be documented for multiple JDBC drivers (if available), or is a specific one to be picked? How is that one chosen?

  • Good question, yes. Wouldn't it be up to the contributor to pick one (I suppose we could ask for the reason to be documented if there are multiple), with other contributors able to add support/testing for other drivers?
Maybe a (short) phase of discussion in which interested parties in the community could voice their concerns would be helpful.
  • Yes, but again, I think it's up to the contributor. I assume a contributor is creating and contributing a platform because (s)he needs it for something, so it will be tested with the driver that was required at the time. The contributor could ask the community which driver's support is most in demand, but shouldn't that be at his/her discretion?
With respect to multiple drivers, things may become tricky if different JDBC drivers achieve different test results, or even require deviating behaviour from the database platform instance.
  • Would it be? Oracle seems to have different platform classes depending on the DB version; I assume it gets the version info from the driver's metadata. That metadata also contains the driver's details, so those can be queried in a similar way and the platform class can change its behaviour accordingly, as sketched below.
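For what it's worth, such driver-sensitive behaviour could be sketched with nothing but the standard java.sql.DatabaseMetaData API; the class, field, and rule below are illustrative assumptions, not the actual EclipseLink hook:

    import java.sql.Connection;
    import java.sql.DatabaseMetaData;
    import java.sql.SQLException;

    // Illustrative only: read database and driver versions from the JDBC
    // metadata and toggle a platform capability accordingly.
    public class DriverAwarePlatformHelper {
        private boolean useAnsiJoins;

        public void initialize(Connection connection) throws SQLException {
            DatabaseMetaData md = connection.getMetaData();
            int dbMajor = md.getDatabaseMajorVersion();
            String driverName = md.getDriverName();
            String driverVersion = md.getDriverVersion();
            // hypothetical rule: only newer servers with the vendor's own
            // driver get the ANSI join syntax
            useAnsiJoins = dbMajor >= 10 && driverName.contains("Vendor");
            System.out.println("Driver: " + driverName + " " + driverVersion
                    + ", ANSI joins: " + useAnsiJoins);
        }
    }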


There are certainly some areas of the DatabasePlatform design that cause confusion, e.g. whether a DatabasePlatform represents a specific version of a database, a specific JDBC driver, or some combination of those things. - tware

Having said that, the number of times this confusion has caused actual customer issues is surprisingly small. In general, because database vendors have an extremely strong interest in backward compatibility, we rarely have an issue with new drivers and database versions breaking our existing platform functionality. In fact, if you look at the OraclePlatform hierarchy, you'll see that the OraclePlatform subclasses, for the most part, do not override behavior; they simply add features that are available in the newer versions. - tware
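A sketch of that pattern (the class and method names are illustrative, not the real OraclePlatform API): the version subclass inherits all existing behaviour and only switches on what the newer server adds:

    // Illustrative only: a base platform and a version subclass that enables
    // an added capability rather than changing inherited behaviour.
    class AcmeDbPlatform {
        /** Capability shared by all supported AcmeDB versions. */
        public boolean supportsSequenceObjects() {
            return false;
        }
    }

    class AcmeDb11Platform extends AcmeDbPlatform {
        /** AcmeDB 11 added sequence objects; everything else is inherited. */
        @Override
        public boolean supportsSequenceObjects() {
            return true;
        }
    }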

The main place we have seen problems is when a third party releases a driver for a specific database (and not when the vendor of a database releases a new version). - tware

I believe the restriction we should put on the addition of new subclasses of a DatabasePlatform for a specific database is that there has to be a behavior change that cannot reasonably be supported in a single class. - tware

I do agree that it should be very explicit which platforms should be expected to work, and with what versions of the database they should be expected to work. I think this is partly addressed by publishing test results and the versions of the database that those tests ran against. I also like the idea of a "red list" as suggested above. Each major release, we could review the testing that has been performed on each DatabasePlatform and, for platforms that have not been properly tested for that major release, add them to that list. If a DatabasePlatform stays on that list for a couple of releases, we could deprecate it, and an appropriate amount of time after deprecation, if tests are still not run, we could prune it - after giving the community proper notice. - tware

Technical Requirements

In order to make test results reproducible and comparable, it would probably also be necessary to describe the configuration of the DBMS required to achieve the documented test results (such as installed character sets, settings of system parameters, and so forth). This may also include specific client-side parameters that need to be set (e.g. as part of the connection URL) in order to connect to the database appropriately.
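For example, the client-side half of that description could be captured as the exact connection properties used for the documented runs, here with the standard JPA 2.0 property names and placeholder values (URL, driver, and account are assumptions):

    import java.util.HashMap;
    import java.util.Map;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;

    // Placeholder values: the point is to record the exact URL (including any
    // required connection parameters), driver, and account used for a test
    // run, so that the documented results are reproducible.
    public class DocumentedConnection {
        public static EntityManagerFactory create() {
            Map<String, String> props = new HashMap<String, String>();
            props.put("javax.persistence.jdbc.driver", "com.example.jdbc.Driver");
            props.put("javax.persistence.jdbc.url",
                      "jdbc:example://dbhost:1234/testdb;charset=UTF8");
            props.put("javax.persistence.jdbc.user", "scott");
            props.put("javax.persistence.jdbc.password", "tiger");
            return Persistence.createEntityManagerFactory("platform-test-pu", props);
        }
    }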

Additional Comments

Some of these overlap with comments above, but I am noting them here anyway to show that they are shared by several people.

1- How to start: the links to procedures, downloads, and how to set up the environment.
I think a link on the Incubator page to where this information can be found would help.

2- When a test fails, you suggest that the test itself may be modified or bypassed.
Should there be any limit to how far the test can be changed?
If the test procedure was changed and the execution succeeds, will the status be "passed" or "passed with limitations"?

There are two kinds of test modification allowed: 1. modification to avoid running the test on the platform in question; 2. modification that does not affect the goal of the test (an example of this is changing the way the test cleans up after itself). In the first case, this indicates a limitation. In the second case, it indicates a pass.
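To illustrate the first kind of modification, a minimal JUnit sketch (the test, the platform check, and the system property are hypothetical; the real suites would consult the session's platform instead):

    import junit.framework.TestCase;

    // Hypothetical type-1 modification: skip the test on a platform with a
    // documented limitation instead of weakening the assertion.
    public class JoinSyntaxTest extends TestCase {
        public void testInnerJoinQuery() {
            if (isSymfoware()) {
                return; // documented limitation: no ANSI INNER JOIN support
            }
            // ... run the query and assert on the result as on other platforms
        }

        private boolean isSymfoware() {
            // placeholder check; not the actual EclipseLink test API
            return "symfoware".equals(System.getProperty("db.platform"));
        }
    }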

3- Is there any concern about performance?

Performance is very important. In general, it should be fairly straightforward to maintain performance when just making changes to DatabasePlatforms: the majority of functionality that could affect performance occurs outside of the actual DatabasePlatform code. The key here is that, when submitted code is reviewed, performance is kept in mind and things like extra round trips to the database are avoided when possible. We do performance testing on a regular basis, and the committer group could likely provide some advice to anyone who wanted to run performance testing on a DatabasePlatform that is not in the group we test. - tware

4- Will someone besides the incubator check the test results?
Is that necessary?

In many cases, it will be difficult for anyone but the incubator to actually run the tests. There is, however, not much benefit to misrepresenting test results, because that would conflict with the goal of contributing a working DatabasePlatform. I think this is an area where we have to trust our community members and allow the community to police them. - tware

5- When should the test suite be run again?
For major releases, for instance?
Who will be responsible for that?
If a test suite is not executed by a deadline, should the platform be removed from EclipseLink?

I think that, ideally, we should have a test run for at least each major release, and flag any DatabasePlatforms that do not meet that goal. We cannot always expect community members to be working on the same schedule as us, so there will have to be some flexibility there. My thought is that publishing a list of platforms that are not well tested is a good start towards policing this. - tware
