June 29th, 2011 08:00

Problem reading a 900MB file (getting READ_BLOB error code -10209)

Hello,

We have been using Centera for a while without problems. Now we are trying to write and read larger files (e.g. 900MB) and we get the following error:

An error in the generic stream occurred (transid='nr0u0037/31/READ_BLOB')(-10209)

The documentation says:

-10209 : FP_STREAM_ERR : An error occurred in the generic stream. Check your code.

The code has not been modified in more than a year and works correctly on smaller files. Does this mean we may have hit a size limitation somewhere?

We are using the Java SDK and the BlobRead function. Searching the forum, I found that an alternative may be to use the BlobReadPartial function, but I am not sure that would solve our problem, as we need to read the whole blob anyway.

Has anyone already met this kind of problem?

You will find the code for the read method below:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import com.filepool.fplibrary.*; // Centera SDK: FPPool, FPClip, FPTag, FPLibraryConstants, FPLibraryException

public static ByteArrayOutputStream retrieveContent(String idCentera) throws CenteraOpersTechnicalException {

        ByteArrayOutputStream outputStream = new ByteArrayOutputStream();

        try {

            FPPool.RegisterApplication(CenteraOpersConstants.APP_NAME, CenteraOpersConstants.APP_VERSION);
            FPPool.setGlobalOption(FPLibraryConstants.FP_OPTION_OPENSTRATEGY, FPLibraryConstants.FP_LAZY_OPEN);

            if (idCentera == null) {
                throw new CenteraOpersTechnicalException("Invalid Centera identifier");
            }
           
            // Contact cluster to load C-Clip

            FPClip theClip = new FPClip(getThePool(), idCentera, FPLibraryConstants.FP_OPEN_FLAT);

            FPTag topTag = theClip.getTopTag();

            // check clip metadata to see if this is 'our' data format           
            if (!topTag.getTagName().equals(CenteraOpersConstants.TAG_NAME)) {
                throw new IllegalArgumentException("This clip was not written by the 'Store Content' sample.");
            }

            // BlobRead streams the entire blob into the in-memory buffer in one call.
            topTag.BlobRead(outputStream);
            outputStream.close();
            topTag.Close();
            theClip.Close();

            // Always close the Pool connection when finished.
            CenteraOperations.closePool();

        } catch (FPLibraryException e) {
            throw new CenteraOpersTechnicalException("A Centera SDK library error occurred: " + e.getMessage() + " (" + e.getErrorCode() + ")");
        } catch (IllegalArgumentException e) {
            throw new CenteraOpersTechnicalException("Invalid clip format while reading element " + idCentera + ": " + e.getMessage());
        } catch (IOException e) {
            throw new CenteraOpersTechnicalException("An IO error occurred while reading element " + idCentera + ": " + e.getMessage());
        }

        return outputStream;
    }

Regards,

Laurent


July 8th, 2011 11:00

Don't know if this can be the reason, but are you sure you're not bumping into some kind of memory issue? Reading a 900MB file into a byte array might put some strain on the Java heap space.

Regards,

Kim


July 11th, 2011 08:00

It might be that 2GB is enough to store the file once, but if you read into a ByteArrayOutputStream, you allocate a truckload of byte[] arrays, because the buffer has to grow constantly (it doubles in size every time it grows):

buf = Arrays.copyOf(buf, Math.max(buf.length << 1, newcount));

So it might be that too much garbage is created and the SDK hits an OutOfMemoryError internally. The difference from a standalone program that reads and writes locally is that the SDK generates some load and also uses internal buffers, which might push the JVM right over the edge.

Don't know if this is the problem, but it might not hurt to try an OutputStream that just throws away all data, just to check whether this is indeed the problem, or to use something like new ByteArrayOutputStream(1024 * 1024 * 1024) to avoid all the array copies and garbage (but that's a bit decadent :-)).
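
Something quick and dirty like this would do for the throw-away test (just a sketch, not compiled against the SDK; the class name is only for illustration):

import java.io.IOException;
import java.io.OutputStream;

// An OutputStream that discards everything written to it. Passing this to
// BlobRead keeps the SDK's own load and internal buffering in play while
// taking the growing ByteArrayOutputStream out of the equation.
public class DiscardingOutputStream extends OutputStream {

    @Override
    public void write(int b) throws IOException {
        // discard
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        // discard
    }
}

If BlobRead succeeds with that, the memory theory looks right; if it still fails with -10209, the problem is somewhere else.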


July 11th, 2011 08:00

Hello Kim,

It is true that a 900MB file is big, but I have configured the JVM with a 2GB heap, and I can read and write the file locally without problems.

In the scenario I have described, the problem seems to be on the Centera side, and only for reading: writing the same file appears to work (at least I get no errors, but I cannot check whether the file was stored without corruption because I cannot read it back...).

Regards,

Laurent


July 25th, 2011 09:00

Hello,

I finally managed to solve my initial problem by using the BlobReadPartial method together with multiple threads. I read somewhere in the Centera documentation that a read on a single stream cannot last more than 60 seconds, and that may be what was causing the error.

To solve my problem, I used the attached sample that was posted by Graham Stuart on another thread.
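
For anyone landing here later, the core of the approach is roughly this (a simplified, single-threaded sketch of the idea; the attached sample reads the chunks from multiple threads, and I haven't compiled this as-is):

import java.io.IOException;
import java.io.OutputStream;
import com.filepool.fplibrary.FPLibraryException;
import com.filepool.fplibrary.FPTag;

// Read a large blob in fixed-size chunks via BlobReadPartial, so that no
// single stream has to stay open long enough to hit the 60s limit.
public static void retrieveContentChunked(FPTag topTag, OutputStream out)
        throws FPLibraryException, IOException {
    final long CHUNK_SIZE = 50L * 1024 * 1024; // arbitrary 50MB chunks, tune to your network
    long blobSize = topTag.getBlobSize();
    for (long offset = 0; offset < blobSize; offset += CHUNK_SIZE) {
        long length = Math.min(CHUNK_SIZE, blobSize - offset);
        // Each BlobReadPartial call opens a fresh stream for just this
        // chunk, so no single read runs longer than the per-stream limit.
        topTag.BlobReadPartial(out, offset, length);
    }
}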

Regards,

Laurent

1 Attachment
