C:\Ana\lib\site-packages\IPython\core\interactiveshell.py:3267: FutureWarning:
Panel is deprecated and will be removed in a future version.
The recommended way to represent these types of 3-dimensional data are with a MultiIndex on a DataFrame, via the Panel.to_frame() method
Alternatively, you can use the xarray package http://xarray.pydata.org/en/stable/.
Pandas provides a `.to_xarray()` method to help automate this conversion.
  exec(code_obj, self.user_global_ns, self.user_ns)
<class 'pandas.core.panel.Panel'>
Dimensions: 2 (items) x 4 (major_axis) x 5 (minor_axis)
Items axis: 0 to 1
Major_axis axis: 0 to 3
Minor_axis axis: 0 to 4
data = {'Item1': pd.DataFrame(np.random.randn(4, 3)),
        'Item2': pd.DataFrame(np.random.randn(4, 2))}
p = pd.Panel(data)
p
C:\Ana\lib\site-packages\IPython\core\interactiveshell.py:3267: FutureWarning:
Panel is deprecated and will be removed in a future version.
The recommended way to represent these types of 3-dimensional data are with a MultiIndex on a DataFrame, via the Panel.to_frame() method
Alternatively, you can use the xarray package http://xarray.pydata.org/en/stable/.
Pandas provides a `.to_xarray()` method to help automate this conversion.
  exec(code_obj, self.user_global_ns, self.user_ns)
<class 'pandas.core.panel.Panel'>
Dimensions: 2 (items) x 4 (major_axis) x 3 (minor_axis)
Items axis: Item1 to Item2
Major_axis axis: 0 to 3
Minor_axis axis: 0 to 2
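Since Panel is deprecated, the same dict of DataFrames can be stacked into a single DataFrame with a MultiIndex, as the warning suggests. A minimal sketch of that migration using `pd.concat` (an added example, not part of the original notes):

```python
import numpy as np
import pandas as pd

# Sketch of the migration the FutureWarning recommends: instead of
# pd.Panel, stack the per-item DataFrames under a MultiIndex.
data = {'Item1': pd.DataFrame(np.random.randn(4, 3)),
        'Item2': pd.DataFrame(np.random.randn(4, 2))}

# concat with a dict uses the keys as the outer index level
frame = pd.concat(data)

print(frame.index.nlevels)  # 2 levels: (item, row)
print(frame.shape)          # (8, 3) -- columns are unioned; Item2 gets NaN in column 2
```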
#Create a Dictionary of series
d = {'Name':pd.Series(['Tom','James','Ricky','Vin','Steve','Smith','Jack',
'Lee','David','Gasper','Betina','Andres']),
'Age':pd.Series([25,26,25,23,30,29,23,34,40,30,51,46]),
'Rating':pd.Series([4.23,3.24,3.98,2.56,3.20,4.6,3.8,3.78,2.98,4.80,4.10,3.65])
}
#Create a DataFrame
df = pd.DataFrame(d)
df
      Name  Age  Rating
0      Tom   25    4.23
1    James   26    3.24
2    Ricky   25    3.98
3      Vin   23    2.56
4    Steve   30    3.20
5    Smith   29    4.60
6     Jack   23    3.80
7      Lee   34    3.78
8    David   40    2.98
9   Gasper   30    4.80
10  Betina   51    4.10
11  Andres   46    3.65
df.sum()  # by default, sums down each column (adds the rows together)
Name      TomJamesRickyVinSteveSmithJackLeeDavidGasperBe...
Age       382
Rating    44.92
dtype: object
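To make the axis behaviour concrete, here is a small added sketch contrasting the default column-wise sum with `axis=1`:

```python
import pandas as pd

df = pd.DataFrame({'Age': [25, 26], 'Rating': [4.23, 3.24]})

# Default axis=0: collapse the rows, one sum per column
col_sums = df.sum()
print(col_sums['Age'])       # 51

# axis=1: collapse the columns, one sum per row (Age + Rating)
row_sums = df.sum(axis=1)
```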
Help on method sort_index in module pandas.core.frame:
sort_index(axis=0, level=None, ascending=True, inplace=False, kind='quicksort', na_position='last', sort_remaining=True, by=None) method of pandas.core.frame.DataFrame instance
Sort object by labels (along an axis)
Parameters
----------
axis : index, columns to direct sorting
level : int or level name or list of ints or list of level names
if not None, sort on values in specified index level(s)
ascending : boolean, default True
Sort ascending vs. descending
inplace : bool, default False
if True, perform operation in-place
kind : {'quicksort', 'mergesort', 'heapsort'}, default 'quicksort'
Choice of sorting algorithm. See also ndarray.np.sort for more
information. `mergesort` is the only stable algorithm. For
DataFrames, this option is only applied when sorting on a single
column or label.
na_position : {'first', 'last'}, default 'last'
`first` puts NaNs at the beginning, `last` puts NaNs at the end.
Not implemented for MultiIndex.
sort_remaining : bool, default True
if true and sorting by level and index is multilevel, sort by other
levels too (in order) after sorting by specified level
Returns
-------
sorted_obj : DataFrame
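A minimal usage sketch of `sort_index` (an invented example) to contrast it with `sort_values` below: it orders rows by the index labels, not by cell values:

```python
import pandas as pd

# Rows stored with an out-of-order index
df = pd.DataFrame({'x': [10, 20, 30]}, index=[2, 0, 1])

by_label = df.sort_index()                      # index becomes 0, 1, 2
by_label_desc = df.sort_index(ascending=False)  # index becomes 2, 1, 0

print(list(by_label.index))      # [0, 1, 2]
print(list(by_label['x']))       # [20, 30, 10]
```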
Help on method sort_values in module pandas.core.frame:
sort_values(by, axis=0, ascending=True, inplace=False, kind='quicksort', na_position='last') method of pandas.core.frame.DataFrame instance
Sort by the values along either axis
Parameters
----------
by : str or list of str
Name or list of names to sort by.
- if `axis` is 0 or `'index'` then `by` may contain index
levels and/or column labels
- if `axis` is 1 or `'columns'` then `by` may contain column
levels and/or index labels
.. versionchanged:: 0.23.0
Allow specifying index or column level names.
axis : {0 or 'index', 1 or 'columns'}, default 0
Axis to be sorted
ascending : bool or list of bool, default True
Sort ascending vs. descending. Specify list for multiple sort
orders. If this is a list of bools, must match the length of
the by.
inplace : bool, default False
If True, perform operation in-place
kind : {'quicksort', 'mergesort', 'heapsort'}, default 'quicksort'
Choice of sorting algorithm. See also ndarray.np.sort for more
information. `mergesort` is the only stable algorithm. For
DataFrames, this option is only applied when sorting on a single
column or label.
na_position : {'first', 'last'}, default 'last'
`first` puts NaNs at the beginning, `last` puts NaNs at the end
Returns
-------
sorted_obj : DataFrame
Examples
--------
>>> df = pd.DataFrame({
... 'col1' : ['A', 'A', 'B', np.nan, 'D', 'C'],
... 'col2' : [2, 1, 9, 8, 7, 4],
... 'col3': [0, 1, 9, 4, 2, 3],
... })
>>> df
  col1  col2  col3
0    A     2     0
1    A     1     1
2    B     9     9
3  NaN     8     4
4    D     7     2
5    C     4     3
Sort by col1
>>> df.sort_values(by=['col1'])
  col1  col2  col3
0    A     2     0
1    A     1     1
2    B     9     9
5    C     4     3
4    D     7     2
3  NaN     8     4
Sort by multiple columns
>>> df.sort_values(by=['col1', 'col2'])
  col1  col2  col3
1    A     1     1
0    A     2     0
2    B     9     9
5    C     4     3
4    D     7     2
3  NaN     8     4
Sort Descending
>>> df.sort_values(by='col1', ascending=False)
  col1  col2  col3
4    D     7     2
5    C     4     3
2    B     9     9
0    A     2     0
1    A     1     1
3  NaN     8     4
Putting NAs first
>>> df.sort_values(by='col1', ascending=False, na_position='first')
  col1  col2  col3
3  NaN     8     4
4    D     7     2
5    C     4     3
2    B     9     9
0    A     2     0
1    A     1     1
Help on method cat in module pandas.core.strings:
cat(others=None, sep=None, na_rep=None, join=None) method of pandas.core.strings.StringMethods instance
Concatenate strings in the Series/Index with given separator.
If `others` is specified, this function concatenates the Series/Index
and elements of `others` element-wise.
If `others` is not passed, then all values in the Series/Index are
concatenated into a single string with a given `sep`.
Parameters
----------
others : Series, Index, DataFrame, np.ndarray or list-like
Series, Index, DataFrame, np.ndarray (one- or two-dimensional) and
other list-likes of strings must have the same length as the
calling Series/Index, with the exception of indexed objects (i.e.
Series/Index/DataFrame) if `join` is not None.
If others is a list-like that contains a combination of Series,
np.ndarray (1-dim) or list-like, then all elements will be unpacked
and must satisfy the above criteria individually.
If others is None, the method returns the concatenation of all
strings in the calling Series/Index.
sep : string or None, default None
If None, concatenates without any separator.
na_rep : string or None, default None
Representation that is inserted for all missing values:
- If `na_rep` is None, and `others` is None, missing values in the
Series/Index are omitted from the result.
- If `na_rep` is None, and `others` is not None, a row containing a
missing value in any of the columns (before concatenation) will
have a missing value in the result.
join : {'left', 'right', 'outer', 'inner'}, default None
Determines the join-style between the calling Series/Index and any
Series/Index/DataFrame in `others` (objects without an index need
to match the length of the calling Series/Index). If None,
alignment is disabled, but this option will be removed in a future
version of pandas and replaced with a default of `'left'`. To
disable alignment, use `.values` on any Series/Index/DataFrame in `others`.
.. versionadded:: 0.23.0
Returns
-------
concat : str or Series/Index of objects
If `others` is None, `str` is returned, otherwise a `Series/Index`
(same type as caller) of objects is returned.
See Also
--------
split : Split each string in the Series/Index
Examples
--------
When not passing `others`, all values are concatenated into a single
string:
>>> s = pd.Series(['a', 'b', np.nan, 'd'])
>>> s.str.cat(sep=' ')
'a b d'
By default, NA values in the Series are ignored. Using `na_rep`, they
can be given a representation:
>>> s.str.cat(sep=' ', na_rep='?')
'a b ? d'
If `others` is specified, corresponding values are concatenated with
the separator. Result will be a Series of strings.
>>> s.str.cat(['A', 'B', 'C', 'D'], sep=',')
0    a,A
1    b,B
2    NaN
3    d,D
dtype: object
Missing values will remain missing in the result, but can again be
represented using `na_rep`
>>> s.str.cat(['A', 'B', 'C', 'D'], sep=',', na_rep='-')
0    a,A
1    b,B
2    -,C
3    d,D
dtype: object
If `sep` is not specified, the values are concatenated without
separation.
>>> s.str.cat(['A', 'B', 'C', 'D'], na_rep='-')
0    aA
1    bB
2    -C
3    dD
dtype: object
Series with different indexes can be aligned before concatenation. The
`join`-keyword works as in other methods.
>>> t = pd.Series(['d', 'a', 'e', 'c'], index=[3, 0, 4, 2])
>>> s.str.cat(t, join=None, na_rep='-')
0 ad
1 ba
2 -e
3 dc
dtype: object
>>>
>>> s.str.cat(t, join='left', na_rep='-')
0 aa
1 b-
2 -c
3 dd
dtype: object
>>>
>>> s.str.cat(t, join='outer', na_rep='-')
0 aa
1 b-
2 -c
3 dd
4 -e
dtype: object
>>>
>>> s.str.cat(t, join='inner', na_rep='-')
0 aa
2 -c
3 dd
dtype: object
>>>
>>> s.str.cat(t, join='right', na_rep='-')
3 dd
0 aa
4 -e
2 -c
dtype: object
For more examples, see :ref:`here <text.concatenate>`.
s.str.get_dummies()
s.str.get_dummies()
    William Rick  Alber@t  John  Tom
0              0        0     0    1
1              1        0     0    0
2              0        0     1    0
3              0        1     0    0
s = pd.Series(['Tom ', ' William Rick', 'Alber@t', 'John'])
s.str.get_dummies()
    William Rick  Alber@t  John  Tom
0              0        0     0    1
1              1        0     0    0
2              0        1     0    0
3              0        0     1    0
So this represents each distinct string value as an indicator column in a DataFrame?
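That reading is right: `get_dummies` one-hot encodes the distinct values into indicator columns. A tiny added check:

```python
import pandas as pd

s = pd.Series(['a', 'b', 'a'])

# One indicator column per distinct value; 1 marks where it occurs
dummies = s.str.get_dummies()

print(dummies.columns.tolist())  # ['a', 'b']
print(dummies['a'].tolist())     # [1, 0, 1]
```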
s.str.contains()
s.str.contains(' ')
0     True
1     True
2    False
3    False
dtype: bool
s.str.contains('o')
0     True
1    False
2    False
3     True
dtype: bool
s.str.replace()
s.str.replace('@', '$')
0 Tom
1 William Rick
2 Alber$t
3 John
dtype: object
s.str.repeat()
s = pd.Series(['Tom ', ' William Rick', 'Alber@t', 'John', np.nan])
s.str.repeat(3)
0 Tom Tom Tom
1 William Rick William Rick William Rick
2 Alber@tAlber@tAlber@t
3 JohnJohnJohn
dtype: object
s.str.repeat([1, 2, 3, 4])
0 Tom
1 William Rick William Rick
2 Alber@tAlber@tAlber@t
3 JohnJohnJohnJohn
dtype: object
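The scalar vs. list behaviour of `repeat` can be checked with a short added example:

```python
import pandas as pd

s = pd.Series(['ab', 'c'])

# A scalar repeats every element the same number of times;
# a sequence gives a per-element count (must match the Series length)
same = s.str.repeat(2)
per = s.str.repeat([1, 3])

print(same.tolist())   # ['abab', 'cc']
print(per.tolist())    # ['ab', 'ccc']
```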
display.max_rows : int
If max_rows is exceeded, switch to truncate view. Depending on
`large_repr`, objects are either centrally truncated or printed as
a summary view. 'None' value means unlimited.
In case python/IPython is running in a terminal and `large_repr`
equals 'truncate' this can be set to 0 and pandas will auto-detect
the height of the terminal and print a truncated object which fits
the screen height. The IPython notebook, IPython qtconsole, or
IDLE do not run in a terminal and hence it is not possible to do
correct auto-detection.
[default: 60] [currently: 60]
option_context()
with pd.option_context("display.max_rows", 10):
    print(pd.get_option("display.max_rows"))   # 10 inside the context
print(pd.get_option("display.max_rows"))       # default (60) restored on exit
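`option_context` restores the old value on exit; for a permanent change, `set_option`/`reset_option` can be used instead (a sketch, assuming the documented default of 60):

```python
import pandas as pd

# set_option changes the option globally until reset_option restores it
pd.set_option("display.max_rows", 10)
print(pd.get_option("display.max_rows"))  # 10

pd.reset_option("display.max_rows")
print(pd.get_option("display.max_rows"))  # 60 (the documented default)
```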
Help on method pct_change in module pandas.core.generic:
pct_change(periods=1, fill_method='pad', limit=None, freq=None, **kwargs) method of pandas.core.frame.DataFrame instance
Percentage change between the current and a prior element.
Computes the percentage change from the immediately previous row by
default. This is useful in comparing the percentage of change in a time
series of elements.
Parameters
----------
periods : int, default 1
Periods to shift for forming percent change.
fill_method : str, default 'pad'
How to handle NAs before computing percent changes.
limit : int, default None
The number of consecutive NAs to fill before stopping.
freq : DateOffset, timedelta, or offset alias string, optional
Increment to use from time series API (e.g. 'M' or BDay()).
**kwargs
Additional keyword arguments are passed into
`DataFrame.shift` or `Series.shift`.
Returns
-------
chg : Series or DataFrame
The same type as the calling object.
See Also
--------
Series.diff : Compute the difference of two elements in a Series.
DataFrame.diff : Compute the difference of two elements in a DataFrame.
Series.shift : Shift the index by some number of periods.
DataFrame.shift : Shift the index by some number of periods.
Examples
--------
**Series**
>>> s = pd.Series([90, 91, 85])
>>> s
0    90
1    91
2    85
dtype: int64
>>> s.pct_change()
0         NaN
1    0.011111
2   -0.065934
dtype: float64
>>> s.pct_change(periods=2)
0         NaN
1         NaN
2   -0.055556
dtype: float64
See the percentage change in a Series where filling NAs with last
valid observation forward to next valid.
>>> s = pd.Series([90, 91, None, 85])
>>> s
0    90.0
1    91.0
2     NaN
3    85.0
dtype: float64
>>> s.pct_change(fill_method='ffill')
0         NaN
1    0.011111
2    0.000000
3   -0.065934
dtype: float64
**DataFrame**
Percentage change in French franc, Deutsche Mark, and Italian lira from
1980-01-01 to 1980-03-01.
>>> df = pd.DataFrame({
... 'FR': [4.0405, 4.0963, 4.3149],
... 'GR': [1.7246, 1.7482, 1.8519],
... 'IT': [804.74, 810.01, 860.13]},
... index=['1980-01-01', '1980-02-01', '1980-03-01'])
>>> df
                FR      GR      IT
1980-01-01  4.0405  1.7246  804.74
1980-02-01  4.0963  1.7482  810.01
1980-03-01  4.3149  1.8519  860.13
>>> df.pct_change()
                  FR        GR        IT
1980-01-01       NaN       NaN       NaN
1980-02-01  0.013810  0.013684  0.006549
1980-03-01  0.053365  0.059318  0.061876
Percentage of change in GOOG and APPL stock volume. Shows computing
the percentage change between columns.
>>> df = pd.DataFrame({
... '2016': [1769950, 30586265],
... '2015': [1500923, 40912316],
... '2014': [1371819, 41403351]},
... index=['GOOG', 'APPL'])
>>> df
          2016      2015      2014
GOOG   1769950   1500923   1371819
APPL  30586265  40912316  41403351
>>> df.pct_change(axis='columns')
      2016      2015      2014
GOOG   NaN -0.151997 -0.086016
APPL   NaN  0.337604  0.012002
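Ignoring NA handling, `pct_change(n)` is just division by the series shifted `n` steps, minus 1; a quick added sketch:

```python
import pandas as pd

s = pd.Series([90, 91, 85])

# pct_change(1) == s / s.shift(1) - 1 when there are no missing values
manual = s / s.shift(1) - 1
auto = s.pct_change()

print((manual.dropna() - auto.dropna()).abs().max() < 1e-12)  # True
```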
Help on method fillna in module pandas.core.frame:
fillna(value=None, method=None, axis=None, inplace=False, limit=None, downcast=None, **kwargs) method of pandas.core.frame.DataFrame instance
Fill NA/NaN values using the specified method
Parameters
----------
value : scalar, dict, Series, or DataFrame
Value to use to fill holes (e.g. 0), alternately a
dict/Series/DataFrame of values specifying which value to use for
each index (for a Series) or column (for a DataFrame). (values not
in the dict/Series/DataFrame will not be filled). This value cannot
be a list.
method : {'backfill', 'bfill', 'pad', 'ffill', None}, default None
Method to use for filling holes in reindexed Series
pad / ffill: propagate last valid observation forward to next valid
backfill / bfill: use NEXT valid observation to fill gap
axis : {0 or 'index', 1 or 'columns'}
inplace : boolean, default False
If True, fill in place. Note: this will modify any
other views on this object, (e.g. a no-copy slice for a column in a
DataFrame).
limit : int, default None
If method is specified, this is the maximum number of consecutive
NaN values to forward/backward fill. In other words, if there is
a gap with more than this number of consecutive NaNs, it will only
be partially filled. If method is not specified, this is the
maximum number of entries along the entire axis where NaNs will be
filled. Must be greater than 0 if not None.
downcast : dict, default is None
a dict of item->dtype of what to downcast if possible,
or the string 'infer' which will try to downcast to an appropriate
equal type (e.g. float64 to int64 if possible)
See Also
--------
interpolate : Fill NaN values using interpolation.
reindex, asfreq
Returns
-------
filled : DataFrame
Examples
-------->>> df = pd.DataFrame([[np.nan, 2, np.nan, 0],
... [3, 4, np.nan, 1],
... [np.nan, np.nan, np.nan, 5],
... [np.nan, 3, np.nan, 4]],
... columns=list('ABCD'))
>>> df
A B C D
     A    B   C  D
0  NaN  2.0 NaN  0
1  3.0  4.0 NaN  1
2  NaN  NaN NaN  5
3  NaN  3.0 NaN  4
Replace all NaN elements with 0s.
>>> df.fillna(0)
A B C D
     A    B    C  D
0  0.0  2.0  0.0  0
1  3.0  4.0  0.0  1
2  0.0  0.0  0.0  5
3  0.0  3.0  0.0  4
We can also propagate non-null values forward or backward.
>>> df.fillna(method='ffill')
A B C D
     A    B   C  D
0  NaN  2.0 NaN  0
1  3.0  4.0 NaN  1
2  3.0  4.0 NaN  5
3  3.0  3.0 NaN  4
Replace all NaN elements in column 'A', 'B', 'C', and 'D', with 0, 1,
2, and 3 respectively.
>>> values = {'A': 0, 'B': 1, 'C': 2, 'D': 3}
>>> df.fillna(value=values)
A B C D
     A    B    C  D
0  0.0  2.0  2.0  0
1  3.0  4.0  2.0  1
2  0.0  1.0  2.0  5
3  0.0  3.0  2.0  4
Only replace the first NaN element.
>>> df.fillna(value=values, limit=1)
A B C D
     A    B    C  D
0  0.0  2.0  2.0  0
1  3.0  4.0  NaN  1
2  NaN  1.0  NaN  5
3  NaN  3.0  NaN  4
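Complementing the `ffill` example above, the backward variant pulls the next valid value upward; a short added sketch using `bfill()` (equivalent to `fillna(method='bfill')`):

```python
import numpy as np
import pandas as pd

s = pd.Series([np.nan, np.nan, 3.0, np.nan, 5.0])

# Each NaN takes the next valid value below it
back = s.bfill()
print(back.tolist())        # [3.0, 3.0, 3.0, 5.0, 5.0]

# limit=1: fill at most one consecutive NaN per gap
capped = s.bfill(limit=1)
print(capped.isna().sum())  # 1 -- the first NaN of the leading pair stays
```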
Help on method replace in module pandas.core.frame:
replace(to_replace=None, value=None, inplace=False, limit=None, regex=False, method='pad') method of pandas.core.frame.DataFrame instance
Replace values given in `to_replace` with `value`.
Values of the DataFrame are replaced with other values dynamically.
This differs from updating with ``.loc`` or ``.iloc``, which require
you to specify a location to update with some value.
Parameters
----------
to_replace : str, regex, list, dict, Series, int, float, or None
How to find the values that will be replaced.
* numeric, str or regex:
- numeric: numeric values equal to `to_replace` will be
replaced with `value`
- str: string exactly matching `to_replace` will be replaced
with `value`
- regex: regexs matching `to_replace` will be replaced with
`value`
* list of str, regex, or numeric:
- First, if `to_replace` and `value` are both lists, they
**must** be the same length.
- Second, if ``regex=True`` then all of the strings in **both**
lists will be interpreted as regexs otherwise they will match
directly. This doesn't matter much for `value` since there
are only a few possible substitution regexes you can use.
- str, regex and numeric rules apply as above.
* dict:
- Dicts can be used to specify different replacement values
for different existing values. For example,
``{'a': 'b', 'y': 'z'}`` replaces the value 'a' with 'b' and
'y' with 'z'. To use a dict in this way the `value`
parameter should be `None`.
- For a DataFrame a dict can specify that different values
should be replaced in different columns. For example,
``{'a': 1, 'b': 'z'}`` looks for the value 1 in column 'a'
and the value 'z' in column 'b' and replaces these values
with whatever is specified in `value`. The `value` parameter
should not be ``None`` in this case. You can treat this as a
special case of passing two lists except that you are
specifying the column to search in.
- For a DataFrame nested dictionaries, e.g.,
``{'a': {'b': np.nan}}``, are read as follows: look in column
'a' for the value 'b' and replace it with NaN. The `value`
parameter should be ``None`` to use a nested dict in this
way. You can nest regular expressions as well. Note that
column names (the top-level dictionary keys in a nested
dictionary) **cannot** be regular expressions.
* None:
- This means that the `regex` argument must be a string,
compiled regular expression, or list, dict, ndarray or
Series of such elements. If `value` is also ``None`` then
this **must** be a nested dictionary or Series.
See the examples section for examples of each of these.
value : scalar, dict, list, str, regex, default None
Value to replace any values matching `to_replace` with.
For a DataFrame a dict of values can be used to specify which
value to use for each column (columns not in the dict will not be
filled). Regular expressions, strings and lists or dicts of such
objects are also allowed.
inplace : boolean, default False
If True, performs the operation in place. Note: this will modify any
other views on this object (e.g. a column from a DataFrame).
Returns the caller if this is True.
limit : int, default None
Maximum size gap to forward or backward fill.
regex : bool or same types as `to_replace`, default False
Whether to interpret `to_replace` and/or `value` as regular
expressions. If this is ``True`` then `to_replace` *must* be a
string. Alternatively, this could be a regular expression or a
list, dict, or array of regular expressions in which case
`to_replace` must be ``None``.
method : {'pad', 'ffill', 'bfill', `None`}
The method to use for replacement, when `to_replace` is a
scalar, list or tuple and `value` is ``None``.
.. versionchanged:: 0.23.0
Added to DataFrame.
See Also
--------
DataFrame.fillna : Fill NA values
DataFrame.where : Replace values based on boolean condition
Series.str.replace : Simple string replacement.
Returns
-------
DataFrame
Object after replacement.
Raises
------
AssertionError
* If `regex` is not a ``bool`` and `to_replace` is not
``None``.
TypeError
* If `to_replace` is a ``dict`` and `value` is not a ``list``,
``dict``, ``ndarray``, or ``Series``
* If `to_replace` is ``None`` and `regex` is not compilable
into a regular expression or is a list, dict, ndarray, or
Series.
* When replacing multiple ``bool`` or ``datetime64`` objects and
the arguments to `to_replace` does not match the type of the
value being replaced
ValueError
* If a ``list`` or an ``ndarray`` is passed to `to_replace` and
`value` but they are not the same length.
Notes
-----
* Regex substitution is performed under the hood with ``re.sub``. The
rules for substitution for ``re.sub`` are the same.
* Regular expressions will only substitute on strings, meaning you
cannot provide, for example, a regular expression matching floating
point numbers and expect the columns in your frame that have a
numeric dtype to be matched. However, if those floating point
numbers *are* strings, then you can do this.
* This method has *a lot* of options. You are encouraged to experiment
and play with this method to gain intuition about how it works.
* When dict is used as the `to_replace` value, it is like
key(s) in the dict are the to_replace part and
value(s) in the dict are the value parameter.
Examples
--------
**Scalar `to_replace` and `value`**
>>> s = pd.Series([0, 1, 2, 3, 4])
>>> s.replace(0, 5)
0    5
1    1
2    2
3    3
4    4
dtype: int64
>>> df = pd.DataFrame({'A': [0, 1, 2, 3, 4],
... 'B': [5, 6, 7, 8, 9],
... 'C': ['a', 'b', 'c', 'd', 'e']})
>>> df.replace(0, 5)
   A  B  C
0  5  5  a
1  1  6  b
2  2  7  c
3  3  8  d
4  4  9  e
**List-like `to_replace`**
>>> df.replace([0, 1, 2, 3], 4)
   A  B  C
0  4  5  a
1  4  6  b
2  4  7  c
3  4  8  d
4  4  9  e
>>> df.replace([0, 1, 2, 3], [4, 3, 2, 1])
   A  B  C
0  4  5  a
1  3  6  b
2  2  7  c
3  1  8  d
4  4  9  e
>>> s.replace([1, 2], method='bfill')
0    0
1    3
2    3
3    3
4    4
dtype: int64
**dict-like `to_replace`**
>>> df.replace({0: 10, 1: 100})
     A  B  C
0   10  5  a
1  100  6  b
2    2  7  c
3    3  8  d
4    4  9  e
>>> df.replace({'A': 0, 'B': 5}, 100)
     A    B  C
0  100  100  a
1    1    6  b
2    2    7  c
3    3    8  d
4    4    9  e
>>> df.replace({'A': {0: 100, 4: 400}})
     A  B  C
0  100  5  a
1    1  6  b
2    2  7  c
3    3  8  d
4  400  9  e
**Regular expression `to_replace`**
>>> df = pd.DataFrame({'A': ['bat', 'foo', 'bait'],
... 'B': ['abc', 'bar', 'xyz']})
>>> df.replace(to_replace=r'^ba.$', value='new', regex=True)
      A    B
0   new  abc
1   foo  new
2  bait  xyz
>>> df.replace({'A': r'^ba.$'}, {'A': 'new'}, regex=True)
      A    B
0   new  abc
1   foo  bar
2  bait  xyz
>>> df.replace(regex=r'^ba.$', value='new')
      A    B
0   new  abc
1   foo  new
2  bait  xyz
>>> df.replace(regex={r'^ba.$':'new', 'foo':'xyz'})
      A    B
0   new  abc
1   xyz  new
2  bait  xyz
>>> df.replace(regex=[r'^ba.$', 'foo'], value='new')
      A    B
0   new  abc
1   new  new
2  bait  xyz
Note that when replacing multiple ``bool`` or ``datetime64`` objects,
the data types in the `to_replace` parameter must match the data
type of the value being replaced:
>>> df = pd.DataFrame({'A': [True, False, True],
... 'B': [False, True, False]})
>>> df.replace({'a string': 'new value', True: False}) # raises
Traceback (most recent call last):
...
TypeError: Cannot compare types 'ndarray(dtype=bool)' and 'str'
This raises a ``TypeError`` because one of the ``dict`` keys is not of
the correct type for replacement.
Compare the behavior of ``s.replace({'a': None})`` and
``s.replace('a', None)`` to understand the peculiarities
of the `to_replace` parameter:
>>> s = pd.Series([10, 'a', 'a', 'b', 'a'])
When one uses a dict as the `to_replace` value, it is like the
value(s) in the dict are equal to the `value` parameter.
``s.replace({'a': None})`` is equivalent to
``s.replace(to_replace={'a': None}, value=None, method=None)``:
>>> s.replace({'a': None})
0      10
1    None
2    None
3       b
4    None
dtype: object
When ``value=None`` and `to_replace` is a scalar, list or
tuple, `replace` uses the method parameter (default 'pad') to do the
replacement. So this is why the 'a' values are being replaced by 10
in rows 1 and 2 and 'b' in row 4 in this case.
The command ``s.replace('a', None)`` is actually equivalent to
``s.replace(to_replace='a', value=None, method='pad')``:
>>> s.replace('a', None)
0    10
1    10
2    10
3     b
4     b
dtype: object