Metadata-Version: 2.1
Name: urlextract
Version: 0.14.0
Summary: Collects and extracts URLs from given text.
Home-page: https://github.com/lipoja/URLExtract
Author: Jan Lipovský
Author-email: janlipovsky@gmail.com
License: MIT
Project-URL: Documentation, https://urlextract.readthedocs.io/en/latest/
Project-URL: Source Code, https://github.com/lipoja/URLExtract
Description: URLExtract
        ----------
        
        URLExtract is a Python class for collecting (extracting) URLs from given
        text based on locating TLDs.
        
        .. image:: https://img.shields.io/travis/lipoja/URLExtract/master.svg
            :target: https://travis-ci.org/lipoja/URLExtract
            :alt: Build Status
        .. image:: https://img.shields.io/github/tag/lipoja/URLExtract.svg
            :target: https://github.com/lipoja/URLExtract/tags
            :alt: Git tag
        .. image:: https://img.shields.io/pypi/pyversions/urlextract.svg
            :target: https://pypi.python.org/pypi/urlextract
            :alt: Python Version Compatibility
        
        
        How does it work
        ~~~~~~~~~~~~~~~~
        
        It tries to find any occurrence of a TLD in the given text. When a TLD is
        found, it expands the boundaries from that position to both sides,
        searching for a "stop character" (usually whitespace, comma, single or
        double quote).
        
        NOTE: The list of TLDs is downloaded from iana.org to keep you up to date with new TLDs.
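
        The expansion idea can be sketched roughly like this (an illustrative
        simplification, not the project's actual implementation; the
        ``STOP_CHARS`` set and the hard-coded TLD are assumptions made for the
        example):

        .. code:: python

            # characters that terminate the expansion on either side
            STOP_CHARS = set(" \t\n,'\"")

            def expand_around(text, tld_pos, tld):
                # walk left from the TLD until a stop character (or start of text)
                start = tld_pos
                while start > 0 and text[start - 1] not in STOP_CHARS:
                    start -= 1
                # walk right past the TLD until a stop character (or end of text)
                end = tld_pos + len(tld)
                while end < len(text) and text[end] not in STOP_CHARS:
                    end += 1
                return text[start:end]

            text = "Visit janlipovsky.cz today"
            pos = text.find(".cz")
            print(expand_around(text, pos, ".cz"))  # prints: janlipovsky.cz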
        
        Installation
        ~~~~~~~~~~~~
        
        The package is available on PyPI, so you can install it via pip.
        
        .. image:: https://img.shields.io/pypi/v/urlextract.svg
            :target: https://pypi.python.org/pypi/urlextract
        .. image:: https://img.shields.io/pypi/status/urlextract.svg
            :target: https://pypi.python.org/pypi/urlextract
        
        ::
        
           pip install urlextract
        
        Documentation
        ~~~~~~~~~~~~~
        
        Online documentation is published at http://urlextract.readthedocs.io/
        
        
        Requirements
        ~~~~~~~~~~~~
        
        - IDNA for converting links to IDNA format
        - uritools for domain name validation
        - appdirs for determining user's cache directory
        
           ::
        
               pip install idna
               pip install uritools
               pip install appdirs
        
        Example
        ~~~~~~~
        
        You can look at the command-line program at the end of *urlextract.py*.
        But everything you need to know is this:
        
        .. code:: python
        
            from urlextract import URLExtract
        
            extractor = URLExtract()
            urls = extractor.find_urls("Text with URLs. Let's have URL janlipovsky.cz as an example.")
            print(urls) # prints: ['janlipovsky.cz']
        
        Or you can get a generator over the URLs in the text:
        
        .. code:: python
        
            from urlextract import URLExtract
        
            extractor = URLExtract()
            example_text = "Text with URLs. Let's have URL janlipovsky.cz as an example."
        
            for url in extractor.gen_urls(example_text):
                print(url) # prints: janlipovsky.cz
        
        Or, if you just want to check whether there is at least one URL, you can do:
        
        .. code:: python
        
            from urlextract import URLExtract
        
            extractor = URLExtract()
            example_text = "Text with URLs. Let's have URL janlipovsky.cz as an example."
        
            if extractor.has_urls(example_text):
                print("Given text contains some URL")
        
        If you want to have an up-to-date list of TLDs, you can use ``update()``:
        
        .. code:: python
        
            from urlextract import URLExtract
        
            extractor = URLExtract()
            extractor.update()
        
        or ``update_when_older()`` method:
        
        .. code:: python
        
            from urlextract import URLExtract
        
            extractor = URLExtract()
            extractor.update_when_older(7) # updates when the list is older than 7 days
        
        Known issues
        ~~~~~~~~~~~~
        
        Since a TLD can be not only an abbreviation but also a meaningful word, we might see "false matches" when
        searching for URLs in HTML pages. A false match can occur, for example, in CSS or JS when you refer to an
        HTML element by its classes.
        
        Example HTML code:
        
        .. code-block:: html
        
          <p class="bold name">Jan</p>
          <style>
            p.bold.name {
              font-weight: bold;
            }
          </style>
        
        If this HTML snippet is given as input to ``urlextract.find_urls()``, it will return ``p.bold.name`` as a URL.
        This behavior of urlextract is correct, because ``.name`` is a valid TLD, so urlextract sees ``bold.name`` as a
        valid domain name with ``p`` as a valid sub-domain.
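
        One pragmatic workaround is to post-filter the results yourself, for
        example by keeping only matches that carry an explicit scheme or a
        ``www.`` prefix. This is a sketch of one possible filter, not a built-in
        urlextract feature:

        .. code:: python

            def filter_candidates(candidates):
                # drop bare-word matches such as CSS selectors; keep entries
                # that have an explicit scheme or a leading "www."
                return [u for u in candidates
                        if "://" in u or u.startswith("www.")]

            found = ["p.bold.name", "https://example.com", "www.iana.org"]
            print(filter_candidates(found))
            # prints: ['https://example.com', 'www.iana.org']

        Such a filter will of course also drop legitimate scheme-less URLs
        (like ``janlipovsky.cz``), so whether it fits depends on your input.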
        
        License
        ~~~~~~~
        
        This piece of code is licensed under the MIT License.
        
Keywords: url,extract,find,finder,collect,link,tld,list
Platform: UNKNOWN
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Topic :: Text Processing
Classifier: Topic :: Text Processing :: Markup :: HTML
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Description-Content-Type: text/x-rst
