# Current File : //usr/lib/python3/dist-packages/pygments/formatters/__pycache__/other.cpython-312.pyc
"""
    pygments.formatters.other
    ~~~~~~~~~~~~~~~~~~~~~~~~~

    Other formatters: NullFormatter, RawTokenFormatter.

    :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

from pygments.formatter import Formatter
from pygments.util import get_choice_opt
from pygments.token import Token
from pygments.console import colorize

__all__ = ['NullFormatter', 'RawTokenFormatter', 'TestcaseFormatter']


class NullFormatter(Formatter):
    """
    Output the text unchanged without any formatting.
    """
    name = 'Text only'
    aliases = ['text', 'null']
    filenames = ['*.txt']

    def format(self, tokensource, outfile):
        enc = self.encoding
        for ttype, value in tokensource:
            if enc:
                outfile.write(value.encode(enc))
            else:
                outfile.write(value)


class RawTokenFormatter(Formatter):
    r"""
    Format tokens as a raw representation for storing token streams.

    The format is ``tokentype<TAB>repr(tokenstring)\n``. The output can later
    be converted to a token stream with the `RawTokenLexer`, described in the
    :doc:`lexer list <lexers>`.

    Only two options are accepted:

    `compress`
        If set to ``'gz'`` or ``'bz2'``, compress the output with the given
        compression algorithm after encoding (default: ``''``).
    `error_color`
        If set to a color name, highlight error tokens using that color.  If
        set but with no value, defaults to ``'red'``.

        .. versionadded:: 0.11

    """
    name = 'Raw tokens'
    aliases = ['raw', 'tokens']
    filenames = ['*.raw']

    unicodeoutput = False

    def __init__(self, **options):
        Formatter.__init__(self, **options)
        # The raw tokens formatter only outputs ASCII; override any
        # encoding passed in through the options.
        self.encoding = 'ascii'
        self.compress = get_choice_opt(options, 'compress',
                                       ['', 'none', 'gz', 'bz2'], '')
        self.error_color = options.get('error_color', None)
        if self.error_color is True:
            self.error_color = 'red'
        if self.error_color is not None:
            try:
                colorize(self.error_color, '')
            except KeyError:
                raise ValueError("Invalid color %r specified" %
                                 self.error_color)

    def format(self, tokensource, outfile):
        try:
            outfile.write(b'')
        except TypeError:
            raise TypeError('The raw tokens formatter needs a binary '
                            'output file')
        if self.compress == 'gz':
            import gzip
            outfile = gzip.GzipFile('', 'wb', 9, outfile)

            write = outfile.write
            flush = outfile.close
        elif self.compress == 'bz2':
            import bz2
            compressor = bz2.BZ2Compressor(9)

            def write(text):
                outfile.write(compressor.compress(text))

            def flush():
                outfile.write(compressor.flush())
                outfile.flush()
        else:
            write = outfile.write
            flush = outfile.flush

        if self.error_color:
            for ttype, value in tokensource:
                line = b"%r\t%r\n" % (ttype, value)
                if ttype is Token.Error:
                    write(colorize(self.error_color, line))
                else:
                    write(line)
        else:
            for ttype, value in tokensource:
                write(b"%r\t%r\n" % (ttype, value))
        flush()


TESTCASE_BEFORE = '''\
    def testNeedsName(lexer):
        fragment = %r
        tokens = [
'''
TESTCASE_AFTER = '''\
        ]
        assert list(lexer.get_tokens(fragment)) == tokens
'''


class TestcaseFormatter(Formatter):
    """
    Format tokens as appropriate for a new testcase.

    .. versionadded:: 2.0
    """
    name = 'Testcase'
    aliases = ['testcase']

    def __init__(self, **options):
        Formatter.__init__(self, **options)
        if self.encoding is not None and self.encoding != 'utf-8':
            raise ValueError("Only None and utf-8 are allowed encodings.")

    def format(self, tokensource, outfile):
        indentation = ' ' * 12
        rawbuf = []
        outbuf = []
        for ttype, value in tokensource:
            rawbuf.append(value)
            outbuf.append('%s(%r, %r),\n' % (indentation, ttype, value))

        before = TESTCASE_BEFORE % (''.join(rawbuf),)
        during = ''.join(outbuf)
        after = TESTCASE_AFTER
        if self.encoding is None:
            outfile.write(before + during + after)
        else:
            outfile.write(before.encode('utf-8'))
            outfile.write(during.encode('utf-8'))
            outfile.write(after.encode('utf-8'))
        outfile.flush()