Derive a Gibbs Sampler for the LDA Model

$\newcommand{\argmax}{\mathop{\mathrm{argmax}}\limits}$

A latent Dirichlet allocation (LDA) model is a machine learning technique for identifying latent topics in text corpora within a Bayesian hierarchical framework; it is a general probabilistic framework first proposed by Blei et al. (2003). Topic modeling is a branch of unsupervised natural language processing that represents a text document through a small number of topics that best explain its underlying content, and LDA is the canonical example of a topic model. Approaches that explicitly or implicitly model the distribution of inputs as well as outputs are known as generative models, because by sampling from them it is possible to generate synthetic data points in the input space (Bishop 2006); fitting a generative model means finding the setting of its latent variables that best explains the observed data. Unlike a hard clustering model, which inherently assumes that the data divide into disjoint sets (e.g., documents by topic), LDA lets every document mix several topics. This chapter focuses on LDA as a generative model and on deriving a collapsed Gibbs sampler for it.

The starting point is a document-word matrix in which the value of each cell denotes the frequency of word $w_j$ in document $d_i$. The LDA algorithm trains a topic model by converting this document-word matrix into two lower-dimensional matrices: a document-topic matrix and a topic-word matrix. The generative process behind those matrices (following Darling 2011) uses the following ingredients:

alpha (\(\overrightarrow{\alpha}\)): in order to determine \(\theta\), the topic distribution of a document, we sample from a Dirichlet distribution using \(\overrightarrow{\alpha}\) as the input parameter. The \(\overrightarrow{\alpha}\) values are our prior information about the topic mixtures for that document.

beta (\(\overrightarrow{\beta}\)): in order to determine \(\phi\), the word distribution of a given topic, we sample from a Dirichlet distribution using \(\overrightarrow{\beta}\) as the input parameter. The \(\overrightarrow{\beta}\) values are our prior information about the word distribution in a topic.

xi (\(\xi\)): in the case of a variable-length document, the document length is determined by sampling from a Poisson distribution with an average length of \(\xi\) (in the examples here the average document length is 10).

The topic, $z$, of the next word is drawn from a multinomial distribution with the parameter \(\theta\), and the word itself is then drawn from the chosen topic's word distribution \(\phi_{z}\).

Throughout, we use symmetric priors: all values in \(\overrightarrow{\alpha}\) are equal to one another and all values in \(\overrightarrow{\beta}\) are equal to one another. The intent of this section is not to delve into different methods of parameter estimation for \(\alpha\) and \(\beta\), but to give a general understanding of how those values affect the model. Building on the document generating model in chapter two, let's create documents that have words drawn from more than one topic; this time we will also be taking a look at the code used to generate the example documents as well as the inference code. A small simulation sketch of the generative process is given below.
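To make this concrete, here is a minimal simulation of the generative process. This is my own sketch rather than the chapter's original generation code; the corpus sizes, vocabulary size, and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not taken from the text)
K, V, D = 3, 8, 5            # topics, vocabulary size, documents
alpha = np.full(K, 1.0)      # symmetric document-topic prior
beta = np.full(V, 1.0)       # symmetric topic-word prior
xi = 10                      # average document length (Poisson mean)

phi = rng.dirichlet(beta, size=K)   # one word distribution per topic
docs = []
for d in range(D):
    theta_d = rng.dirichlet(alpha)  # topic mixture of document d
    N_d = rng.poisson(xi)           # document length
    z_d = rng.choice(K, size=N_d, p=theta_d)                 # topic of each word
    w_d = np.array([rng.choice(V, p=phi[z]) for z in z_d])   # word drawn from phi_z
    docs.append(w_d)

print(docs[0])  # word indices of the first synthetic document
```

Each document is just a list of word indices; the latent quantities `theta_d` and `z_d` are exactly what the sampler derived below will try to recover.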
We have talked about LDA as a generative model, but now it is time to flip the problem around: what if my goal is to infer which topics are present in each document and which words belong to each topic? This is where inference for LDA comes into play. Before going through the derivation of how we infer the document-topic distributions and the topic-word distributions, it helps to go over the process of inference more generally. (NOTE: the derivation of LDA inference via Gibbs sampling below is taken from (Darling 2011), (Heinrich 2008) and (Steyvers and Griffiths 2007).)

Ideally we would work directly with the posterior over all latent variables,

\begin{equation}
p(\theta, \phi, z \mid w, \alpha, \beta) = {p(\theta, \phi, z, w \mid \alpha, \beta) \over p(w \mid \alpha, \beta)},
\tag{6.1}
\end{equation}

but the denominator of Equation (6.1), the marginal likelihood of the data, cannot be computed exactly: it requires summing over every possible configuration of topic assignments. Griffiths and Steyvers (2004) derived a Gibbs sampling algorithm for learning LDA by boiling the process down to the posterior over topic assignments alone,

\begin{equation}
P(\mathbf{z} \mid \mathbf{w}) = {P(\mathbf{w}, \mathbf{z}) \over \sum_{\mathbf{z}'} P(\mathbf{w}, \mathbf{z}')} \propto P(\mathbf{w} \mid \mathbf{z})\, P(\mathbf{z}),
\tag{6.2}
\end{equation}

whose normalizing constant is still intractable but which can be sampled from one assignment at a time. They showed that the extracted topics capture essential structure in the data, and collapsed Gibbs sampling has since been shown to be more efficient than many alternative LDA training procedures.

Gibbs sampling, as introduced to the statistics literature by Gelfand and Smith (1990), is one of the most popular Markov chain Monte Carlo methods. It is applicable when the joint distribution is hard to evaluate or sample from directly but the conditional distribution of each variable given all the others is known. The sequence of samples comprises a Markov chain, and the stationary distribution of the chain is the target joint distribution. What Gibbs sampling does in its most standard implementation is simply cycle through all of the variables, resampling each one conditioned on the current values of the rest; it equates to taking a probabilistic random walk through the parameter space, spending more time in the regions that are more likely. Concretely, suppose we want to sample from a joint distribution $p(x_1,\cdots,x_n)$. At iteration $t+1$ we sample $x_1^{(t+1)}$ from $p(x_1\mid x_2^{(t)},\cdots,x_n^{(t)})$, then sample $x_2^{(t+1)}$ from $p(x_2\mid x_1^{(t+1)}, x_3^{(t)},\cdots,x_n^{(t)})$, and so on, finally sampling $x_n^{(t+1)}$ from $p(x_n\mid x_1^{(t+1)},\cdots,x_{n-1}^{(t+1)})$. (A random scan Gibbs sampler visits the coordinates in random order instead of sequentially.) Often, obtaining these full conditionals is not possible, in which case a Gibbs sampler is not implementable to begin with.

When can the collapsed Gibbs sampler be implemented? Whenever the model's conjugacy lets us integrate the parameters out analytically before deriving the sampler, which is exactly the situation in LDA. We could instead sample $\theta$, $\phi$, and $z$ in turn, thereby using an uncollapsed Gibbs sampler, but here I would like to implement the collapsed Gibbs sampler only, which is more memory-efficient and easy to code: we marginalize the target posterior over $\theta$ and $\phi$ and sample only the topic assignments $z$. A toy sketch of the generic Gibbs recipe is given below; the LDA-specific conditionals are derived afterwards.
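As an illustration of the generic recipe (not part of the original text), here is a toy Gibbs sampler for a bivariate normal distribution, a case where both full conditionals are available in closed form; the correlation and iteration count are arbitrary choices.

```python
import numpy as np

def gibbs_bivariate_normal(rho=0.8, n_iter=5000, seed=1):
    """Gibbs sampler for (x1, x2) ~ N(0, [[1, rho], [rho, 1]]).

    Each full conditional is univariate normal:
    x1 | x2 ~ N(rho * x2, 1 - rho**2), and symmetrically for x2.
    """
    rng = np.random.default_rng(seed)
    x1, x2 = 0.0, 0.0
    sd = np.sqrt(1.0 - rho ** 2)
    samples = np.empty((n_iter, 2))
    for t in range(n_iter):
        x1 = rng.normal(rho * x2, sd)   # resample x1 given the current x2
        x2 = rng.normal(rho * x1, sd)   # resample x2 given the new x1
        samples[t] = (x1, x2)
    return samples

samples = gibbs_bivariate_normal()
print(np.corrcoef(samples[1000:].T))  # empirical correlation approaches rho after burn-in
```

The LDA sampler below has exactly this shape; the only real work is in deriving the full conditional for each topic assignment.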
To solve the inference problem we work under the assumption that the documents were generated using the generative model of the previous section, and we begin with the joint distribution of the words and topic assignments with the parameters integrated out. This is accomplished via the chain rule and the definition of conditional probability; reading the dependencies off the directed graphical model,

\begin{equation}
p(w, z \mid \alpha, \beta) = \int \int p(\phi \mid \beta)\, p(\theta \mid \alpha)\, p(z \mid \theta)\, p(w \mid \phi_{z})\, d\theta\, d\phi .
\tag{6.3}
\end{equation}

Because $z$ depends on $\theta$ but not on $\phi$, and $w$ depends on $\phi$ only through $z$, the integrand factors and the two integrals can be solved separately:

\begin{equation}
p(w, z \mid \alpha, \beta) = \int p(z \mid \theta)\, p(\theta \mid \alpha)\, d\theta \; \int p(w \mid \phi_{z})\, p(\phi \mid \beta)\, d\phi .
\tag{6.4}
\end{equation}

Below we solve for the first term of equation (6.4), utilizing the conjugate prior relationship between the multinomial and Dirichlet distributions. For a single document $d$,

\begin{equation}
\begin{aligned}
\int p(z \mid \theta_d)\, p(\theta_d \mid \alpha)\, d\theta_d
  &= \int \prod_{i} \theta_{d, z_i} \, {1 \over B(\alpha)} \prod_{k} \theta_{d,k}^{\alpha_{k} - 1}\, d\theta_d \\
  &= {1 \over B(\alpha)} \int \prod_{k} \theta_{d,k}^{\, n_{d,k} + \alpha_{k} - 1}\, d\theta_d
   = {B(n_{d,\cdot} + \alpha) \over B(\alpha)},
\end{aligned}
\end{equation}

where the product over $i$ runs over the word tokens of document $d$, $n_{d,k}$ is the number of those tokens assigned to topic $k$, $n_{d,\cdot} = (n_{d,1}, \ldots, n_{d,K})$, and $B(\cdot)$ is the multivariate Beta function, $B(\alpha) = \prod_{k}\Gamma(\alpha_{k}) / \Gamma(\sum_{k}\alpha_{k})$. The remaining integral is recognized as an unnormalized Dirichlet and therefore equals another Beta function; this is exactly the conjugacy being exploited. Taking the product over documents gives the first term of (6.4):

\begin{equation}
\int p(z \mid \theta)\, p(\theta \mid \alpha)\, d\theta = \prod_{d=1}^{D} {B(n_{d,\cdot} + \alpha) \over B(\alpha)} .
\tag{6.5}
\end{equation}
The second term of (6.4) follows the same pattern, this time integrating over the topic-word distributions. Grouping word tokens by the topic they are assigned to,

\begin{equation}
\int p(w \mid \phi_{z})\, p(\phi \mid \beta)\, d\phi
  = \prod_{k=1}^{K} {1 \over B(\beta)} \int \prod_{w=1}^{W} \phi_{k,w}^{\, n_{k,w} + \beta_{w} - 1}\, d\phi_{k}
  = \prod_{k=1}^{K} {B(n_{k,\cdot} + \beta) \over B(\beta)},
\tag{6.6}
\end{equation}

where $n_{k,w}$ is the number of times vocabulary word $w$ has been assigned to topic $k$ and $n_{k,\cdot} = (n_{k,1}, \ldots, n_{k,W})$. Multiplying these two equations, we get the collapsed joint distribution

\begin{equation}
p(w, z \mid \alpha, \beta) = \prod_{d=1}^{D} {B(n_{d,\cdot} + \alpha) \over B(\alpha)} \prod_{k=1}^{K} {B(n_{k,\cdot} + \beta) \over B(\beta)} .
\tag{6.7}
\end{equation}

Notice that we have marginalized the target posterior over $\theta$ and $\phi$ (in Blei's notation, over $\theta$ and $\beta$): the only remaining latent variables are the topic assignments $z$, and everything is expressed through the two count matrices $n_{d,k}$ and $n_{k,w}$. Equation (6.7) is easy to evaluate in log space, which is handy for debugging a sampler and monitoring convergence; a small sketch is given below.
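The helper below evaluates the log of Equation (6.7) from the two count matrices. It is an illustrative sketch I am adding, not code from the chapter; it assumes symmetric scalar priors `alpha` and `beta` and uses `scipy.special.gammaln` for the log-Beta terms.

```python
import numpy as np
from scipy.special import gammaln

def log_multivariate_beta(vec):
    """log B(vec) = sum_i log Gamma(vec_i) - log Gamma(sum_i vec_i)."""
    return gammaln(vec).sum() - gammaln(vec.sum())

def log_joint(n_dk, n_kw, alpha, beta):
    """log p(w, z | alpha, beta) for symmetric priors, per Equation (6.7).

    n_dk : (D, K) document-topic counts
    n_kw : (K, W) topic-word counts
    """
    D, K = n_dk.shape
    W = n_kw.shape[1]
    lp = 0.0
    for d in range(D):
        lp += log_multivariate_beta(n_dk[d] + alpha) - log_multivariate_beta(np.full(K, alpha))
    for k in range(K):
        lp += log_multivariate_beta(n_kw[k] + beta) - log_multivariate_beta(np.full(W, beta))
    return lp
```

Plotting this quantity over iterations is a cheap way to check that a sampler is moving toward, and then hovering around, a mode.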
With the collapsed joint in hand, deriving the sampler amounts to writing down the set of conditional probabilities it needs. In particular, we are interested in the probability of topic $z$ for a given word $w$, conditioned on all other assignments and on our prior assumptions (the hyperparameters), for every word token. Let $i$ index a word token whose document is $d$ and whose vocabulary word is $w$, and let $\neg i$ denote all tokens except $i$. By the definition of conditional probability,

\begin{equation}
p(z_{i} \mid z_{\neg i}, \alpha, \beta, w) = {p(z_{i}, z_{\neg i}, w \mid \alpha, \beta) \over p(z_{\neg i}, w \mid \alpha, \beta)} \propto p(z_{i}, z_{\neg i}, w \mid \alpha, \beta),
\end{equation}

since the denominator does not depend on the value of $z_i$. Rearranging the denominator using the chain rule, which allows us to express the joint probability through conditional probabilities (these can be read off the graphical representation of LDA), gives the equivalent form

\begin{equation}
p(z_{i} \mid z_{\neg i}, w) = {p(w,z) \over p(w,z_{\neg i})} = {p(z) \over p(z_{\neg i})}\,{p(w \mid z) \over p(w_{\neg i} \mid z_{\neg i})\, p(w_{i})},
\end{equation}

where $p(z_{\neg i})$ and $p(w_{\neg i} \mid z_{\neg i})$ are marginalized versions of the first and second terms of Equation (6.7), respectively. Substituting (6.7) into this ratio, every document other than $d$ and every topic other than the candidate value $k$ contributes identical factors to numerator and denominator, so they cancel and we are left with

\begin{equation}
p(z_{i} = k \mid z_{\neg i}, w) \propto {B(n_{d,\cdot} + \alpha) \over B(n_{d,\neg i} + \alpha)} \cdot {B(n_{k,\cdot} + \beta) \over B(n_{k,\neg i} + \beta)},
\tag{6.8}
\end{equation}

where the counts carrying the subscript $\neg i$ exclude the current assignment of token $i$. Expanding each Beta function into Gamma functions, dropping the factors that do not depend on $k$, and applying $\Gamma(x+1) = x\,\Gamma(x)$,

\begin{equation}
\begin{aligned}
p(z_{i} = k \mid z_{\neg i}, w)
  &\propto {\Gamma(n_{d,k} + \alpha_{k}) \over \Gamma(n_{d,\neg i}^{k} + \alpha_{k})}
    \cdot {\Gamma(n_{k,w} + \beta_{w}) \over \Gamma(n_{k,\neg i}^{w} + \beta_{w})}
    \cdot {\Gamma(\sum_{w'=1}^{W} n_{k,\neg i}^{w'} + \beta_{w'}) \over \Gamma(\sum_{w'=1}^{W} n_{k,w'} + \beta_{w'})} \\
  &= (n_{d,\neg i}^{k} + \alpha_{k}) \; {n_{k,\neg i}^{w} + \beta_{w} \over \sum_{w'=1}^{W} n_{k,\neg i}^{w'} + \beta_{w'}} .
\end{aligned}
\tag{6.9}
\end{equation}

The first factor says that a topic is more probable for this token the more often it already appears elsewhere in the same document; the second says it is more probable the more often this word type is already assigned to that topic across the corpus.
endobj In particular we are interested in estimating the probability of topic (z) for a given word (w) (and our prior assumptions, i.e. lda - Question about "Gibbs Sampler Derivation for Latent Dirichlet The only difference between this and (vanilla) LDA that I covered so far is that $\beta$ is considered a Dirichlet random variable here. . Gibbs sampler, as introduced to the statistics literature by Gelfand and Smith (1990), is one of the most popular implementations within this class of Monte Carlo methods. _(:g\/?7z-{>jS?oq#%88K=!&t&,]\k /m681~r5>. To solve this problem we will be working under the assumption that the documents were generated using a generative model similar to the ones in the previous section. \begin{equation} %%EOF \begin{equation} /Filter /FlateDecode We describe an efcient col-lapsed Gibbs sampler for inference. xref /Matrix [1 0 0 1 0 0] Let. endstream \begin{equation} 0000013825 00000 n stream 2.Sample ;2;2 p( ;2;2j ). << Sample $x_2^{(t+1)}$ from $p(x_2|x_1^{(t+1)}, x_3^{(t)},\cdots,x_n^{(t)})$. \begin{equation} <<9D67D929890E9047B767128A47BF73E4>]/Prev 558839/XRefStm 1484>> /FormType 1 $\mathbf{w}_d=(w_{d1},\cdots,w_{dN})$: genotype of $d$-th individual at $N$ loci. 23 0 obj To estimate the intracktable posterior distribution, Pritchard and Stephens (2000) suggested using Gibbs sampling. + \alpha) \over B(\alpha)} Keywords: LDA, Spark, collapsed Gibbs sampling 1. 28 0 obj $C_{dj}^{DT}$ is the count of of topic $j$ assigned to some word token in document $d$ not including current instance $i$. Support the Analytics function in delivering insight to support the strategy and direction of the WFM Operations teams . Staging Ground Beta 1 Recap, and Reviewers needed for Beta 2, Latent Dirichlet Allocation Solution Example, How to compute the log-likelihood of the LDA model in vowpal wabbit, Latent Dirichlet allocation (LDA) in Spark, Debug a Latent Dirichlet Allocation implementation, How to implement Latent Dirichlet Allocation in regression analysis, Latent Dirichlet Allocation Implementation with Gensim. Suppose we want to sample from joint distribution $p(x_1,\cdots,x_n)$. derive a gibbs sampler for the lda model - naacphouston.org << /S /GoTo /D [33 0 R /Fit] >> n_{k,w}}d\phi_{k}\\ So, our main sampler will contain two simple sampling from these conditional distributions: \Gamma(n_{d,\neg i}^{k} + \alpha_{k}) These functions use a collapsed Gibbs sampler to fit three different models: latent Dirichlet allocation (LDA), the mixed-membership stochastic blockmodel (MMSB), and supervised LDA (sLDA). Do not update $\alpha^{(t+1)}$ if $\alpha\le0$. endobj From this we can infer \(\phi\) and \(\theta\). &\propto {\Gamma(n_{d,k} + \alpha_{k}) beta (\(\overrightarrow{\beta}\)) : In order to determine the value of \(\phi\), the word distirbution of a given topic, we sample from a dirichlet distribution using \(\overrightarrow{\beta}\) as the input parameter. By clicking Post Your Answer, you agree to our terms of service, privacy policy and cookie policy. 0000116158 00000 n PDF LDA FOR BIG DATA - Carnegie Mellon University \tag{6.1} 39 0 obj << A feature that makes Gibbs sampling unique is its restrictive context. The clustering model inherently assumes that data divide into disjoint sets, e.g., documents by topic. /Length 351 In population genetics setup, our notations are as follows: Generative process of genotype of $d$-th individual $\mathbf{w}_{d}$ with $k$ predefined populations described on the paper is a little different than that of Blei et al. 
It is worth noting that essentially the same model, and the same sampler, appeared earlier in population genetics. Pritchard and Stephens (2000) originally proposed the idea of solving the population genetics problem with a three-level hierarchical model, before the LDA formulation of Blei et al. (2003). The researchers proposed two models: one that assigns only one population to each individual (the model without admixture), and another that assigns a mixture of populations to each individual (the model with admixture). In the population genetics setup the notation reads as follows: $\mathbf{w}_d=(w_{d1},\cdots,w_{dN})$ is the genotype of the $d$-th individual at $N$ loci; $\theta_d \sim \mathcal{D}_k(\alpha)$, with $\theta_{di}$ the probability that the $d$-th individual's genome originated from population $i$; and each observation is one-hot encoded so that $w_{n}^{i}=1$ and $w_{n}^{j}=0, \forall j\ne i$, for exactly one $i\in V$. The generative process with $k$ predefined populations is written a little differently from that of Blei et al., but the structure is identical, and the only real difference from the vanilla LDA above is that $\beta$ is itself treated as a Dirichlet random variable (smoothed LDA), which is precisely what allows it to be integrated out. Since $\beta$ is independent of $\theta_d$ and affects the choice of $w_{dn}$ only through $z_{dn}$, we may simply write $P(z_{dn}^i=1\mid\theta_d)=\theta_{di}$ and $P(w_{dn}^i=1\mid z_{dn},\beta)=\beta_{ij}$. To estimate the intractable posterior distribution, Pritchard and Stephens (2000) suggested using Gibbs sampling.

In this notation the collapsed sampler runs by sequentially sampling $z_{dn}^{(t+1)}$ given $\mathbf{z}_{(-dn)}^{(t)}$ and $\mathbf{w}$, one token after another, where $\mathbf{z}_{(-dn)}$ is the word-topic assignment for all but the $n$-th word in the $d$-th document and $n_{(-dn)}$ denotes counts that do not include the current assignment of $z_{dn}$. Marginalizing the Dirichlet-multinomial distribution $P(\mathbf{w}, \beta \mid \mathbf{z})$ over $\beta$ gives the topic-word factor of the assignment probability, in which $n_{ij}$ is the number of times word $j$ has been assigned to topic $i$, just as in the vanilla Gibbs sampler; marginalizing the other Dirichlet-multinomial, $P(\mathbf{z},\theta)$, over $\theta$ yields the document-topic factor, in which $n_{di}$ is the number of times a word from document $d$ has been assigned to topic $i$. Multiplying these two factors recovers Equation (6.9).

If we also want to learn the hyperparameter $\alpha$ rather than fix it, one option is a partially collapsed scheme with a Metropolis-within-Gibbs step, so that the main sampler contains two simple draws from conditional distributions (one for $\theta_d$, one for each $z_{dn}$) interleaved with a Metropolis update of $\alpha$. Update $\theta^{(t+1)}$ with a sample from $\theta_d\mid\mathbf{w},\mathbf{z}^{(t)} \sim \mathcal{D}_k(\alpha^{(t)}+\mathbf{m}_d)$, where $\mathbf{m}_d$ are the document-topic counts; then propose $\alpha$ from $\mathcal{N}(\alpha^{(t)}, \sigma_{\alpha^{(t)}}^{2})$ for some proposal variance $\sigma_{\alpha^{(t)}}^{2}$ and accept or reject the proposal with the usual Metropolis ratio. Do not update $\alpha^{(t+1)}$ if the proposal is rejected or if $\alpha\le0$, since a Dirichlet parameter must be positive. A sketch of this step is given below.
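The following is a sketch of the Metropolis-within-Gibbs update for a symmetric scalar $\alpha$, added for illustration and not taken from the original text. It assumes a flat prior on $\alpha$, so the acceptance ratio reduces to the ratio of collapsed joints from Equation (6.7), evaluated with the `log_joint` helper above.

```python
import numpy as np

def update_alpha(alpha, n_dk, n_kw, beta, sigma, rng):
    """Metropolis-within-Gibbs step for a symmetric scalar alpha (flat prior).

    Proposes alpha' ~ N(alpha, sigma**2), rejects non-positive proposals,
    and accepts with probability min(1, p(w, z | alpha') / p(w, z | alpha)).
    """
    proposal = rng.normal(alpha, sigma)
    if proposal <= 0:  # Dirichlet parameters must be positive
        return alpha
    log_ratio = log_joint(n_dk, n_kw, proposal, beta) - log_joint(n_dk, n_kw, alpha, beta)
    if np.log(rng.uniform()) < log_ratio:
        return proposal
    return alpha
```

In practice the proposal scale `sigma` is tuned so that a reasonable fraction of proposals, often somewhere around a quarter to a half, is accepted.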
A few implementation notes, consolidated from the code fragments that accompany this chapter. In the chapter's C++ (Rcpp) inference code, the inner loop mirrors Equation (6.9): for each candidate topic `tpc` it computes an unnormalized probability `p_new[tpc] = (num_term / denom_term) * (num_doc / denom_doc)`, the topic-word ratio times the document-topic ratio, with `denom_doc = n_doc_word_count[cs_doc] + n_topics * alpha` (a document-length normalizer that is constant across topics and therefore harmless), accumulates `p_sum = std::accumulate(p_new.begin(), p_new.end(), 0.0)`, and samples the new topic from the normalized distribution; `int vocab_length = n_topic_term_count.ncol()` supplies the vocabulary size, and a comment notes that the count values are changed outside of the function to prevent confusion. A Python version keeps the same state as NumPy arrays: an `_init_gibbs()` routine instantiates the sizes $V$, $M$, $N$, $K$, the hyperparameters `alpha` and `eta`, the counters `n_iw` and `n_di`, and the assignment table `assign` (an ndarray of shape (M, N, N_GIBBS) updated in place), together with small helpers such as `sample_index(p)`, which samples from a multinomial distribution and returns the sampled index, and `scipy.special.gammaln` for evaluating the log joint.

Ready-made implementations are also widely available. In R, the topicmodels package runs collapsed Gibbs sampling via `ldaOut <- LDA(dtm, k, method = "Gibbs")`, where one either fixes `k` (say `k <- 5`) or runs the algorithm for several values of `k` and makes a choice by inspecting the results; for Gibbs sampling it uses the C++ code from Xuan-Hieu Phan and co-authors. The R package lda likewise uses a collapsed Gibbs sampler to fit three different models, latent Dirichlet allocation (LDA), the mixed-membership stochastic blockmodel (MMSB), and supervised LDA (sLDA); its functions take sparsely represented input documents, perform inference, and return point estimates of the latent parameters. In Python, an optimized collapsed Gibbs implementation is available whose interface follows conventions found in scikit-learn, and gensim's `models.ldamodel` offers an alternative whose model can be updated with new documents, with `gensim.models.ldamulticore` as a faster implementation parallelized for multicore machines. A compact end-to-end skeleton tying the pieces of this chapter together is given below.
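To close, here is a compact end-to-end skeleton in the spirit of the `_init_gibbs` and `sample_index` fragments above. It is a sketch under my own naming assumptions (documents as arrays of word indices, symmetric priors with illustrative default values), not the chapter's original implementation, and it reuses `gibbs_sweep` and `estimate_theta_phi` from the earlier sketch.

```python
import numpy as np

def init_gibbs(docs, K, V, rng):
    """Random initialization of assignments and count matrices (step 1)."""
    doc_of, word_of, z = [], [], []
    for d, doc in enumerate(docs):
        for w in doc:
            doc_of.append(d)
            word_of.append(int(w))
            z.append(rng.integers(K))
    doc_of, word_of, z = (np.array(a) for a in (doc_of, word_of, z))
    n_dk = np.zeros((len(docs), K), dtype=int)
    n_kw = np.zeros((K, V), dtype=int)
    np.add.at(n_dk, (doc_of, z), 1)
    np.add.at(n_kw, (z, word_of), 1)
    return doc_of, word_of, z, n_dk, n_kw, n_kw.sum(axis=1)

def run_lda(docs, K, V, alpha=0.1, beta=0.01, n_iter=200, seed=0):
    """Fit LDA by collapsed Gibbs sampling and return (theta, phi)."""
    rng = np.random.default_rng(seed)
    doc_of, word_of, z, n_dk, n_kw, n_k = init_gibbs(docs, K, V, rng)
    for _ in range(n_iter):                 # steps 2-3: sweep until mixed
        gibbs_sweep(z, doc_of, word_of, n_dk, n_kw, n_k, alpha, beta, rng)
    return estimate_theta_phi(n_dk, n_kw, alpha, beta)

# Example usage with the synthetic corpus generated earlier:
# theta, phi = run_lda(docs, K=3, V=8)
```

Running this on the synthetic corpus from the start of the chapter, the recovered `phi` rows should line up, up to a permutation of the topics, with the `phi` used to generate the documents.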
