We explore the labeled statements of arXiv, as marked up by the authors, and extract a dataset for supervised training.
We explore the section headings of arXiv, collecting all “standard” titles as deposited by the authors.
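Collecting “standard” titles amounts to normalizing the author-deposited heading strings so that variants collapse into one canonical form, then counting frequencies. A minimal sketch of that idea, using hypothetical headings and a simplified normalization (the actual pipeline is not specified here):

```python
from collections import Counter

def normalize(title):
    # Lowercase and strip surrounding numbering/punctuation so variants
    # like "1. Introduction" and "INTRODUCTION" collapse to "introduction".
    return title.lower().strip(" .:0123456789")

# Hypothetical headings as deposited by authors.
headings = ["Introduction", "1. Introduction", "Related Work",
            "INTRODUCTION", "Conclusion"]

counts = Counter(normalize(h) for h in headings)
print(counts.most_common(1))  # -> [('introduction', 3)]
```

The most frequent normalized forms would then be the candidate “standard” titles.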
Comparing pre-trained GloVe models with domain-specific models, while factoring in the orders-of-magnitude differences in data volume, raises a question about generalization.
A select palette of power authoring features, brought to you by Authorea and LaTeXML.
How does one write a scientific document for the web? Can one use LaTeX? Should one?