
Can anyone suggest a way to create an XML document from a text file in Java?
I have already written some code, but it does not meet my needs.

I am trying to create a program that converts a PDF file to a PPT. For that, I need to convert the PDF to a text file and then convert that text file into XML to extract the features. I cannot continue further because the XML file is not being generated correctly.

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;

import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerConfigurationException;
import javax.xml.transform.sax.SAXTransformerFactory;
import javax.xml.transform.sax.TransformerHandler;
import javax.xml.transform.stream.StreamResult;

import org.xml.sax.SAXException;
import org.xml.sax.helpers.AttributesImpl;

public class TextToXml {
    StreamResult out;
    TransformerHandler th;


    public void convrt(File f)
    {
        // Derive the name of the extracted text file ("paper.pdf" -> "paper.txt")
        // and read it from the same directory as the original PDF.
        String fname = f.getName().replaceAll("\\.pdf$", ".txt");
        File txtFile = new File(f.getParentFile(), fname);

        try (BufferedReader br = new BufferedReader(new FileReader(txtFile))) {
            out = new StreamResult("djksgh.xml");
            openXml();

            String strLine;
            while ((strLine = br.readLine()) != null) {
                // Classify each line by its numbering prefix, testing the most
                // specific pattern first. Working directly on the String avoids
                // the stale data left in a reused char[] buffer when lines are
                // shorter than the previous one.
                if (strLine.matches("\\d\\.\\d\\.\\d\\s\\p{L}.*")) {
                    processSS(strLine);       // e.g. "1.2.3 Sub-subsection title"
                } else if (strLine.matches("\\d\\.\\d\\.\\s\\p{L}.*")) {
                    processShead(strLine);    // e.g. "3.1. Subsection title"
                } else if (strLine.matches("\\d\\.\\s\\p{L}.*")) {
                    processhead(strLine);     // e.g. "1. Section title"
                } else {
                    process(strLine);         // ordinary body text
                }
            }

            closeXml();
        } catch (Exception e) {
            System.err.println("Error: " + e.getMessage());
        }
    }

    public void openXml() throws TransformerConfigurationException, SAXException {
        SAXTransformerFactory tf = (SAXTransformerFactory) SAXTransformerFactory.newInstance();
        th = tf.newTransformerHandler();

        // pretty XML output
        Transformer serializer = th.getTransformer();
        serializer.setOutputProperty("{http://xml.apache.org/xslt}indent-amount", "4");
        serializer.setOutputProperty(OutputKeys.INDENT, "yes");

        th.setResult(out);
        th.startDocument();
        // Use empty namespace URIs and an empty attribute list; passing null here
        // can make the serializer fail and leave an incomplete document.
        th.startElement("", "", "MyXml", new AttributesImpl());
    }
    public static boolean isupper(String str) {
        // Returns true when the string contains no lower-case letters.
        for (int i = 0; i < str.length(); i++) {
            if (Character.isLowerCase(str.charAt(i))) {
                return false;
            }
        }
        return true;
    }


    public void process(String s) throws SAXException {
        // Plain body text becomes a <Sentence> element.
        th.startElement("", "", "Sentence", new AttributesImpl());
        th.characters(s.toCharArray(), 0, s.length());
        th.endElement("", "", "Sentence");
    }

    public void processhead(String s) throws SAXException {
        // Lines such as "1. Introduction" become <Section> elements.
        th.startElement("", "", "Section", new AttributesImpl());
        th.characters(s.toCharArray(), 0, s.length());
        th.endElement("", "", "Section");
    }

    public void processShead(String s) throws SAXException {
        // Lines such as "3.1. Procedures ..." become <SubSection> elements.
        th.startElement("", "", "SubSection", new AttributesImpl());
        th.characters(s.toCharArray(), 0, s.length());
        th.endElement("", "", "SubSection");
    }

    public void processSS(String s) throws SAXException {
        // Lines such as "1.2.3 Title" become <SubSubSection> elements.
        th.startElement("", "", "SubSubSection", new AttributesImpl());
        th.characters(s.toCharArray(), 0, s.length());
        th.endElement("", "", "SubSubSection");
    }


    public void closeXml() throws SAXException {
        th.endElement("", "", "MyXml");
        th.endDocument();
    }
}
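
For completeness, a minimal way to drive the converter might look like this (the class name and the sample file path below are only placeholders, not part of the original code):

import java.io.File;

public class TextToXmlDemo {
    public static void main(String[] args) {
        // Assumes "paper.txt" (the extracted text) sits next to "paper.pdf".
        File pdf = new File("paper.pdf");
        new TextToXml().convrt(pdf);
    }
}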

Text file:

Hindawi Publishing Corporation
Journal of Applied Mathematics
Volume 2013, Article ID 749429, 11 pages
http://dx.doi.org/10.1155/2013/749429
Research Article
Image Matching Using Dimensionally Reduced Embedded
Earth Mover’s Distance
Fereshteh Nayyeri and Mohammad Faidzul Nasrudin
Centre for Artificial Intelligence Technology, Faculty of Information Science and Technology, University Kebangsaan Malaysia,
43600 UKM Bangi, Selangor, Malaysia
Correspondence should be addressed to Fereshteh Nayyeri; [email protected]
Received 3 July 2013; Accepted 31 October 2013
Academic Editor: Feng Gao
Copyright © 2013 F. Nayyeri and M. F. Nasrudin. This is an open access article distributed under the Creative Commons
Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is
properly cited.
Finding similar images to a given query image can be computed by different distance measures. One of the general distance measures is the Earth Mover's Distance (EMD). Although EMD has proven its ability to retrieve similar images in an average precision of around 95%, high execution time is its major drawback. Embedding EMD into L1 …
1. Introduction
One of the interesting problems in database communities is image retrieval from large databases. The fundamental issue is how to design a similarity measure in a manner that shows the concept of similarity between two images, because choosing a proper measure has considerable effects on image retrieval applications. Some of the similarity measures include the Earth Mover's Distance (EMD), Jeffrey's divergence, and Minkowski-form distance [1].
The EMD is a general and flexible metric that has desirable and striking properties for content-based image retrieval [2, 3]. Another method, called embedded EMD to L1, was proposed to solve the EMD problem. This method maps the image matrix to an L1 norm; therefore, instead of comparing 2-dimensional matrixes, we can compare 1-dimensional vectors. Although this idea is less time consuming, it produces distortion. Sometimes, an exact computation may be practically infeasible; in this situation, an approximation solution is helpful to find the exact result with some distortion. Both execution time and performance are important factors in image retrieval, and we should attempt to reduce distortion as much as possible. In this paper, we propose two methods to improve the performance of embedded EMD. The first method, sampling, reduces the time but decreases performance. In the next proposed method, sketching, we improve performance by sacrificing the time of execution. Finally, in the last method, by solving the problem of sampling, we improve the performance while reducing the execution time.
Table 1: Relationship between image's size and array's length.

Grid                          G1     G2      G3      G4      G5
Grid size                     n²     n²/4    n²/16   n²/64   n²/256
Number of elements in array   256    64      16      4       1
Side length                   1      2       4       8       16
The remainder of this paper is organized as follows. In Section 2 we discuss related previous work. In Section 3 we describe our proposed technique. Section 4 provides the details of our proposed methods. Finally, we discuss the results and our conclusion in Sections 5 and 6.
2. Previous Work
The concept of Earth Mover Distance (EMD) was first explored in [4] to measure perceptual shape similarity. The use of EMD for computing similarity between images was later proposed in [5]. Some authors in [2] have compared the EMD with other similarity measures and evaluated the retrieval performance of each. The results of the comparisons demonstrate that the EMD is more robust than other measures for the purpose of image retrieval because it matches similarity better than other distances.
… L1 has been developed [12]; although the empirical results show that this distortion is much smaller than what had been estimated previously, the embedding steps themselves decrease the complexity of computing similarity between two images. Other authors [9] have reported on the complexities of querying the time and space of an exact EMD versus an embedded EMD for shape similarity. In this work, we demonstrate how to reduce the complexity of computing correspondence between two images that are mapped to an L1 norm by dimension reduction.
The most similar work in this area is that of Grauman and Darrell [9], who show a contour matching algorithm that quickly quantifies the minimum weight matching between sets of descriptive local features using the embedding of the Earth Mover's Distance (EMD) into a normed space. Their method achieves an increase in speed of four orders of magnitude over the exact method at the cost of only a 4% reduction in accuracy.
3. Dimension Reductions in L1
In modern image retrieval applications, the data is sometimes not only very large relative to the physical memory or even to the disk, but also highly sparse. Accordingly, computing the embedded L1 on large-scale sparse data can be challenging and time consuming. Various projection methods have been suggested for speeding up these computations. Dimension reduction in the L1 … Sampling methods become more important with increasingly large collections [14] because we can use the same set of random samples to estimate any L1 pairwise distances [15], whereas measuring exact pairwise distances is often too time consuming or sometimes infeasible. In general, a sketching algorithm outperforms random sampling, although random sampling is much more flexible [15]. In the sketching method, after scanning the data, we compute specific summary statistics, and then repeat this step k times.
3.1. Procedures of Sampling and Sketching. Suppose we have a database of n images and we want to compare a particular image with this database. To do so, we need a measurement; this is when we use EMD. Consider that we have 2 images with high similarity, for example, in Figures 1 and 2 apples with spots in different positions.
In this situation, the EMD of two spots in these images is computed as follows. Euclidean distance between
(i) 1st pixels: √((8 − 9)² + (12 − 8)²) = √15,
(ii) 2nd pixels: √((8 − 9)² + (13 − 9)²) = √15,
(iii) 3rd pixels: √((9 − 10)² + (12 − 8)²) = √15,
(iv) 4th pixels: √((9 − 10)² + (13 − 9)²) = √15.
Therefore, the EMD of two spots is √15 + √15 + √15 + √15 = 4√15 = 15.5.
In the EMD metric, Euclidean distances between all weighted point sets are computed and then the minimum distance between each pair of point sets can be found. There are different methods to solve this type of weighted matching problems; in our case we use the "Hungarian" method [16–19]. This method finds the minimum distances between each pair of points in two images with n points in O(n³) arithmetic operations; therefore, the typical EMD is very time consuming, which is the biggest drawback for EMD. Another drawback is that when two weighted point sets have unequal total weights, EMD is not an appropriate metric; however, …
We formally show how to construct an embedded EMD into L1. A boundary of √(log n) on any L1 embedding distortion has been defined [20], where n is the number of pixels in the width or height of the image (width and height of the image are equal). We embed the minimum weight matching of contour features into L1 via the EMD embedding of [12, 21]. To embed EMD into L1, we put the bitmap image in a grid whose size is twice bigger than that of the original image and shift the grid randomly upon the image. Afterwards, we map pixels of the new image (which are all 0 or 1) to elements of an array in a special orientation starting from the first pixel in the left-top bit of the image to its last pixel in the right-bottom bit. The rest of the array should be set after some computation. For example, in the embedding of a 16 × 16 image, G1 is the first grid and it includes 256 elements, each of which has a side length equal to 1. The first 256 elements of the array are set with these elements. In the next step, we add each of the 4 neighbouring elements in G1 and place the …
function Calculate Sampling(image 1, image 2)
begin
    Initialize L1 vector 1
    Initialize L1 vector 2
    Initialize sampling L1 vector 1
    Initialize sampling L1 vector 2
    Initialize sampling EMD
    Set L1 vector 1 to Calculate L1(image 1)
    Set L1 vector 2 to Calculate L1(image 2)
    Select 10% indexes of L1 vector 1 randomly
    Put the elements of selected indexes of L1 vector 1 into sampling L1 vector 1
    Put the elements of selected indexes of L1 vector 2 into sampling L1 vector 2
    Subtract each pair of corresponding elements in sampling L1 vector 1 and sampling L1 vector 2
    Add all subtractions into sampling EMD and display it
end
Pseudocode 2
function Calculate Sketching(image 1, image 2)
begin
    Initialize L1 vector 1
    Initialize L1 vector 2
    Initialize Sketching Vector Length as 10% of L1 vector 1 length
    Initialize Sketching Mtrx as L1 vector 1 length x Sketching Vector Length randomly
    // Sketching Mtrx is as in Figure 6
    Initialize sketching vector 1
    Initialize sketching vector 2
    Initialize sketching EMD
    Set L1 vector 1 to Calculate L1(image 1)
    Set L1 vector 2 to Calculate L1(image 2)
    for i = 1 to Sketching Vector Length
        Multiply each pair of corresponding elements in row i of Sketching Mtrx and L1 vector 1
        Put the sum of multiplications in sketching vector 1
        Multiply each pair of corresponding elements in row i of Sketching Mtrx and L1 vector 2
        Put the sum of multiplications in sketching vector 2
    end for
    Subtract each pair of corresponding elements in sketching vector 1 and sketching vector 2
    Add all subtractions into sketching EMD and display it
end
Pseudocode 3
… to reduce the complexity of EMD to O(n) by using dimension reduction in the L1, sampling, and sketching. The concept of the dimension reduction technique from n to a predetermined N-dimensional space is based on linear transformation, for example, elements of transformation of a 2-dimensional matrix A to a 1-dimensional vector [22].
Sampling is an option for dimension reduction in any norm (e.g., L1 or L2). In fact, using this technique, distances in L1 or L2 from random samples can be estimated by a simple scaling [13, 22]. Although it is a simple and popular method to approximate distances, it does not guarantee accuracy. In this method, as it is shown in Figure 5, we randomly pick k (out of D) columns from the image matrix A and image matrix B. We subtract them and set the results as a corresponding element in the sample vector. Finally, we sum all of the elements of the sample vector and call the result the sampling EMD of two images A and B.
In order to get the best or, at least, near to the EEMD method, we tested different sampling rates, for example 5%, 20%, 30%, and above the whole vector. Finally, we found that 10% is the best sampling rate. Therefore, we randomly select 10% of elements from the L1 vector, which will generate just 546 elements.
Sampling EMD is displayed in Pseudocode 2.
Sketching is another option for dimension reduction. In this method, after scanning the data, we multiply the original data of image matrix A and image matrix B by a random matrix R which has either a 0 or 1 for each element, and the subtraction of the resulting matrices forms one element of the sketch vector. We repeat this step k times. The sum of all
function Calculate DREAT(image 1, image 2)
begin
    Initialize L1 vector 1
    Initialize L1 vector 2
    Initialize index Vector
    Initialize DREAT vector 1
    Initialize DREAT vector 2
    Initialize DREAT EMD
    Set L1 vector 1 to Calculate L1(image 1)
    Set L1 vector 2 to Calculate L1(image 2)
    for i = 1 to 7
        Select 10% indexes of G_i randomly
        Put the selected indexes in index Vector
    end for
    Select all elements of L1 vector 1 whose indexes are in index Vector
    Put the elements in DREAT vector 1
    Select all elements of L1 vector 2 whose indexes are in index Vector
    Put the elements in DREAT vector 2
    Subtract each pair of corresponding elements in DREAT vector 1 and DREAT vector 2
    Add all subtractions into DREAT EMD and display it
end
Pseudocode 4
Figure 1: Two figures with high similarity.
Figure 2: Computation of dissimilarity between two input images in Euclidean space and their corresponding EMD flow. (a) Input images; (b) EMD flow.
Figure 3: Mapping of a 16 × 16 image into a vector.
Table 6: Example of average precision calculation.

n   Doc no.   Relevance   Precision points
1   58        Yes         P = 1/1 = 1
2   59        No
3   576       Yes         P = 2/3 = 0.667
4   590       No
5   986       Yes         P = 3/5 = 0.6
6   592       No
elements of the sketch vector is what we call the sketching EMD (A, B). The sketching method is illustrated in Figure 6.
Pseudocode 3 shows the sketching method.
3.2. Procedures of DREAT. Based on the sampling and sketching experiments, the images' L1 vectors are heavily tailed, where there are many zero elements in former grids and many nonzero elements in latter grids. In the sampling method, we choose samples of the L1 vector and apply EMD to the samples instead of the whole vector; therefore, the execution time is reduced. However, the problem is that all elements of the vector are sampled at the same rate.
When we go through the vector, most data in the initial sections, such as G1 and G2, contain almost all zeros when compared with the latter sections, such as G3, G4, and G5. We considered this fact to be a heavy-tailed vector. As a result, when we apply the sampling method, the vector might by chance contain almost all zeros, which is meaningless. That is the reason why we need to create a method that will select an equal portion of samples from each part of the grid instead of randomly sampling from the whole.
We called the proposed method the Dimension Reduction in Embedding by Adjustment in Tail (DREAT), a method that hybridizes both sampling and sketching. For example, suppose we want to select 10% of a vector as a sample vector. In the original sampling method we randomly selected …
4. Experiments
In this work, we tested 5 methods: exact EMD, embedded
EMD, sampling, sketching, and DREAT. Our image dataset
includes bitmap images from Amirkabir University of Iran
[23].
Method          MAP     Percentage of first correct recognition
                        1st pos.   2nd pos.   3rd pos.   4th pos.   5th pos.   6th pos.   >6th pos.
Exact EMD       0.97    0.99       0.01       —          —          —          —          —
Embedded EMD    0.85    0.90       0.01       0.02       0.03       0.01       0.01       0.02
Sampling        0.59    0.58       0.09       0.02       0.05       0.03       0.01       0.23
Sketching       0.87    0.89       0.04       0.02       0.01       —          0.01       0.03
DREAT           0.91    0.91       0.06       0.02       —          —          —          0.01
We did not use the letter images because Persian letters are very similar in handwritten shape, even for a human reader. As some samples shown in Table 4 illustrate, letters that have two or three dots are similar to other handwritten letters with a dot. For example, in the first sample of Table 4, letter "Cha" is very similar to letter "Ja" because the dots of letter "Cha" stick to each other and they look like one dot. Similarly, letter "Zha" is similar to letter "Za" in some cases. In this case, the similarity measurement will produce a high distortion. Since that is not a focus of this work, we excluded all the letters.
We divided our dataset into two parts: reference images and test images. The reference set includes 100 images that we randomly selected from the dataset and tested on the rest of the dataset. In other terms, we removed these 100 reference images from the test images' part, so we will only find similar images to these reference images, not exact ones. For each reference image, we applied the 5 methods and calculated the EMD.
5. Results
We computed the mean average precision (MAP) values for the results of 5 different methods applied to 100 query images. Average precision (AP) is the average of the precision values at the points at which each relevant document is retrieved. Precision is defined as

    Precision = (number of relevant documents retrieved) / (total number of documents retrieved)

The results of all of our experiments are presented in …
6. Conclusion
DREAT is a method that hybridizes both sampling and sketching. In this paper, it shows its usefulness in dimension reduction of sparse and heavy-tailed data. As can be seen in the results, the exact EMD has a MAP value of 0.97; the MAP value is the average of relevant retrieved images among the 10 top-ranked images of 100 images. Although this method is excellent for measuring image similarity, its execution time is very high. By using embedded EMD, a MAP value of 0.85 can be achieved in half the time of exact EMD. Our first proposed method, sampling, reduces the time of execution, but it achieves the poorest MAP value of 0.59.
References
[1] J. K. K. Samuel, Lower Bounds for Embedding the Earth Mover Distance Metric into Normed Spaces, Massachusetts Institute of Technology, Cambridge, Mass, USA, 2005.
[2] Y. Rubner, C. Tomasi, and L. J. Guibas, "Earth mover's distance as a metric for image retrieval," International Journal of Computer Vision, vol. 40, no. 2, pp. 99–121, 2000.
[3] J. Xu, Z. Zhang, A. K. H. Tung, and G. Yu, "Efficient and effective similarity search over probabilistic data based on Earth mover's distance," The VLDB Journal, vol. 21, no. 4, pp. 535–559, 2012.
[4] S. Peleg, M. Werman, and H. Rom, "Unified approach to the change of resolution: space and gray-level," IEEE Transactions on Pattern Analy…
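
For reference, given the element names used in the code above, the XML it is meant to produce should have roughly this shape (the content shown here is only illustrative):

<?xml version="1.0" encoding="UTF-8"?>
<MyXml>
    <Sentence>Hindawi Publishing Corporation</Sentence>
    <Sentence>Journal of Applied Mathematics</Sentence>
    <Section>1. Introduction</Section>
    <Sentence>One of the interesting problems in database communities is ...</Sentence>
    <SubSection>3.1. Procedures of Sampling and Sketching. Suppose we have ...</SubSection>
</MyXml>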

2 Answers


  1. Normally I would create a Java Bean with all the information and then use JAXB or another technology to convert it to XML.
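
     A minimal sketch of that approach (the bean and its fields are invented for illustration; it assumes the JAXB API, javax.xml.bind, which ships with Java 8 or can be added as a dependency on newer JDKs):

        import java.io.File;
        import java.util.ArrayList;
        import java.util.List;
        import javax.xml.bind.JAXBContext;
        import javax.xml.bind.Marshaller;
        import javax.xml.bind.annotation.XmlElement;
        import javax.xml.bind.annotation.XmlRootElement;

        public class BeanToXml {

            // Simple bean holding the information extracted from the text file.
            @XmlRootElement(name = "MyXml")
            public static class DocumentBean {
                @XmlElement(name = "Sentence")
                public List<String> sentences = new ArrayList<>();
            }

            public static void main(String[] args) throws Exception {
                DocumentBean doc = new DocumentBean();
                doc.sentences.add("First line read from the text file");
                doc.sentences.add("Second line read from the text file");

                // JAXB turns the annotated bean into an XML document.
                Marshaller m = JAXBContext.newInstance(DocumentBean.class).createMarshaller();
                m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
                m.marshal(doc, new File("output.xml"));
            }
        }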

  2. An XML file is essentially structured information. The question is whether your converted text file has any specific structure, or only text and punctuation. To convert to XML, the text should at least have some structure that can be recast into XML. So for the text given above, the question is: is it the content of an element, and if so, which element? If not, are the individual words the content of an element or attribute, and if so, which one? You will see that the above text cannot really be converted to XML in any meaningful way.

     However, if your text has some structure, e.g. HEADING:Can anyone suggest or QUESTION:me a way to create an xml document from a text file in java?, then it can be converted to XML after parsing the text file. The suggestion is to put some meta information into the text file.
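
     A rough sketch of that idea, assuming each line of the text file carries a TAG:value prefix (the tag names, file names, and root element are only examples):

        import java.io.BufferedWriter;
        import java.io.FileWriter;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.List;

        public class TaggedTextToXml {
            public static void main(String[] args) throws Exception {
                List<String> lines = Files.readAllLines(Paths.get("input.txt"), StandardCharsets.UTF_8);
                try (BufferedWriter w = new BufferedWriter(new FileWriter("output.xml"))) {
                    w.write("<document>\n");
                    for (String line : lines) {
                        int colon = line.indexOf(':');
                        if (colon <= 0) continue;               // skip lines without a TAG: prefix
                        String tag = line.substring(0, colon).trim().toLowerCase();
                        String value = line.substring(colon + 1).trim()
                                .replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
                        w.write("  <" + tag + ">" + value + "</" + tag + ">\n");
                    }
                    w.write("</document>\n");
                }
            }
        }

     For anything beyond a quick experiment, building the document with an XML API (for example the TransformerHandler already used in the question) is safer than concatenating strings.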
